Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Installing Nextcloud 20 on Fedora Linux with Podman

Monday 15th of February 2021 08:00:00 AM

Nowadays, many open source projects offer container images for easy deployment. This is very handy when running a home server or lab environment. A previous Fedora Magazine article covered installing Nextcloud from the source package. This article explains how you can run Nextcloud on Fedora 33 as a container deployment with Podman.

What is Nextcloud?

Nextcloud started in 2016 as a fork of ownCloud. Since then, it has evolved into full-fledged collaboration software offering file, calendar, and contact syncing, plus much more. You can run a simple Kanban board in it or write documents collaboratively. Nextcloud is fully open source under the AGPLv3 license and can be used for private or commercial purposes alike.

What is Podman?

Podman is a container engine for developing, managing, and running OCI containers on your Linux system. It offers a wide variety of features, such as rootless mode, cgroups v2 support, and pod management, and it can run daemonless. Furthermore, you get a Docker-compatible API for further development. It is available by default on Fedora Workstation and ready to use.

In case you need to install podman, run:

sudo dnf install podman

Designing the Deployment

Every deployment needs a bit of preparation. Sure, you could simply start a container and start using it, but that wouldn’t be much fun. A well-thought-out deployment should be easy to understand and offer some flexibility.

Container / Images

First, you need to choose the proper container images for the deployment. This is quite easy for Nextcloud, since the project already offers pretty good documentation for container deployments. Nextcloud supports two variants: Nextcloud Apache httpd (which is fully self-contained) and Nextcloud php-fpm (which needs an additional nginx container).

In both cases, you also need to provide a database, which can be MariaDB (recommended) or PostgreSQL (also supported). This article uses the Apache httpd + MariaDB installation.

Volumes

Running a container does not persist data you create at runtime, and you perform updates by recreating the container. Therefore, you will need some volumes for the database and the Nextcloud files. Nextcloud also recommends putting the “data” folder in a separate volume. So you will end up with three volumes:

  • nextcloud-app
  • nextcloud-data
  • nextcloud-db

Network

Lastly, you need to consider networking. One of the benefits of containers is that you can re-create your deployment as it might look in production. Network segmentation is a very common practice and should be considered for a container deployment, too. This tutorial does not add advanced features like network load balancing or security segmentation. You will need only one network, which you will use to publish the ports for Nextcloud. Creating a network also provides the dnsname plugin, which allows containers to communicate based on container names.

The picture

Now that every single element is prepared, you can put them together and get a good picture of how the deployment will look.

Run, Nextcloud, Run

Now you have prepared all of the ingredients and you can start running the commands to deploy Nextcloud. All commands can be used for root-privileged or rootless deployments. This article will stick to rootless deployments.

Start with the network:

# Creating a new network
$ podman network create nextcloud-net

# Listing all networks
$ podman network ls

# Inspecting a network
$ podman network inspect nextcloud-net

As you can see from the last command, you created a DNS zone named “dns.podman”. All containers created in this network are reachable via “CONTAINER_NAME.dns.podman”.
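If you want to verify this later, once the nextcloud-db container from the steps below is running, a throwaway container on the same network can resolve it by name. This is only an optional sanity check; the alpine image is used here simply because it ships a small DNS lookup tool:

$ podman run --rm --network nextcloud-net docker.io/library/alpine nslookup nextcloud-db.dns.podman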

Next, optionally prepare your volumes. This step can be skipped, since Podman creates named volumes on demand if they do not exist. Podman supports named volumes, which it creates in special locations, so you don’t need to take care of SELinux labels or the like.

# Creating the volumes
$ podman volume create nextcloud-app
$ podman volume create nextcloud-data
$ podman volume create nextcloud-db

# Listing volumes
$ podman volume ls

# Inspecting volumes (this also provides the full path)
$ podman volume inspect nextcloud-app

Network and volumes are done. Now provide the containers.

First, you need the database. According to the MariaDB image documentation, you need to provide some additional environment variables. Additionally, you need to attach the created volume, connect the network, and provide a name for the container. Most of these values will be needed again in the next commands. (Note that you should replace DB_USER_PASSWORD and DB_ROOT_PASSWORD with unique passwords.)

# Deploy Mariadb
$ podman run --detach \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \
    --volume nextcloud-db:/var/lib/mysql \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud-db \
    docker.io/library/mariadb:10

# Check running containers
$ podman container ls

After the successful start of your new MariaDB container, you can deploy Nextcloud itself. (Note that you should replace DB_USER_PASSWORD with the password you used in the previous step. Replace NC_ADMIN and NC_PASSWORD with the username and password you want to use for the Nextcloud administrator account.)

# Deploy Nextcloud
$ podman run --detach \
    --env MYSQL_HOST=nextcloud-db.dns.podman \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env NEXTCLOUD_ADMIN_USER=NC_ADMIN \
    --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD \
    --volume nextcloud-app:/var/www/html \
    --volume nextcloud-data:/var/www/html/data \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud \
    --publish 8080:80 \
    docker.io/library/nextcloud:20

# Check running containers
$ podman container ls

Now that the two containers are running, you can configure Nextcloud. Open your browser and point it to “localhost:8080” (or use another host name or IP address if it is running on a different server).

The first load may take some time (around 30 seconds) or even report “unable to load”. This comes from Nextcloud preparing its first run. In that case, wait a minute or two. Nextcloud will then prompt for a username and password.

Enter the user name and password you used previously.

Now you are ready to go and experience Nextcloud for testing, development, or your home server.

Update

If you want to update one of the containers, you need to pull the new image and re-create the containers.

# Update mariadb
$ podman pull mariadb:10
$ podman stop nextcloud-db
$ podman rm nextcloud-db
$ podman run --detach \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \
    --volume nextcloud-db:/var/lib/mysql \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud-db \
    docker.io/library/mariadb:10

Updating the Nextcloud container works exactly the same.

# Update Nextcloud
$ podman pull nextcloud:20
$ podman stop nextcloud
$ podman rm nextcloud
$ podman run --detach \
    --env MYSQL_HOST=nextcloud-db.dns.podman \
    --env MYSQL_DATABASE=nextcloud \
    --env MYSQL_USER=nextcloud \
    --env MYSQL_PASSWORD=DB_USER_PASSWORD \
    --env NEXTCLOUD_ADMIN_USER=NC_ADMIN \
    --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD \
    --volume nextcloud-app:/var/www/html \
    --volume nextcloud-data:/var/www/html/data \
    --network nextcloud-net \
    --restart on-failure \
    --name nextcloud \
    --publish 8080:80 \
    docker.io/library/nextcloud:20

That’s it; your Nextcloud installation is up-to-date again.

Conclusion

Deploying Nextcloud with Podman is quite easy. After just a couple of minutes, you will have very handy collaboration software offering file sync, calendar, contacts, and much more. Check out apps.nextcloud.com for apps that extend the features even further.

Network address translation part 2 – the conntrack tool

Friday 12th of February 2021 08:00:00 AM

This is the second article in a series about network address translation (NAT). The first article introduced how to use the iptables/nftables packet tracing feature to find the source of NAT-related connectivity problems. Part 2 introduces the “conntrack” command. conntrack allows you to inspect and modify tracked connections.

Introduction

NAT configured via iptables or nftables builds on top of netfilter’s connection tracking facility. The conntrack command is used to inspect and alter the state table. It is part of the “conntrack-tools” package.

Conntrack state table

The connection tracking subsystem keeps track of all packet flows that it has seen. Run “sudo conntrack -L” to see its content:

tcp 6 43184 ESTABLISHED src=192.168.2.5 dst=10.25.39.80 sport=5646 dport=443 src=10.25.39.80 dst=192.168.2.5 sport=443 dport=5646 [ASSURED] mark=0 use=1
tcp 6 26 SYN_SENT src=192.168.2.5 dst=192.168.2.10 sport=35684 dport=443 [UNREPLIED] src=192.168.2.10 dst=192.168.2.5 sport=443 dport=35684 mark=0 use=1
udp 17 29 src=192.168.8.1 dst=239.255.255.250 sport=48169 dport=1900 [UNREPLIED] src=239.255.255.250 dst=192.168.8.1 sport=1900 dport=48169 mark=0 use=1

Each line shows one connection tracking entry. You might notice that each line shows the addresses and port numbers twice, even with inverted address and port pairs! This is because each entry is inserted into the state table twice. The first address quadruple (source and destination addresses and ports) records the original direction, i.e. what the initiator sent. The second quadruple is what conntrack expects to see when a reply from the peer is received. This solves two problems:

  1. If a NAT rule matches, such as IP address masquerading, this is recorded in the reply part of the connection tracking entry and can then be automatically applied to all future packets that are part of the same flow.
  2. A lookup in the state table will be successful even if it is a reply packet to a flow that has any form of network or port address translation applied.

The original (first shown) quadruple never changes: it is what the initiator sent. NAT manipulation only alters the reply (second) quadruple, because that is what the receiver will see. Changes to the first quadruple would be pointless: netfilter has no control over the initiator’s state; it can only influence the packet as it is received/forwarded. When a packet does not map to an existing entry, conntrack may add a new state entry for it. In the case of UDP this happens automatically. In the case of TCP, conntrack can be configured to only add the new entry if the TCP packet has the SYN bit set. By default, conntrack allows mid-stream pickups so as not to cause problems for flows that existed before conntrack became active.
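The knob that controls this behavior is the nf_conntrack_tcp_loose sysctl. If you prefer that TCP flows are only tracked from their initial SYN, you can turn mid-stream pickups off (note that this can disrupt flows that were established before conntrack was loaded):

# 1 (the default) allows picking up already-established TCP connections; 0 requires a SYN
sudo sysctl -w net.netfilter.nf_conntrack_tcp_loose=0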

Conntrack state table and NAT

As explained in the previous section, the reply tuple listed contains the NAT information. It is possible to filter the output to only show entries with source or destination NAT applied. This lets you see which kind of NAT transformation is active on a given flow. “sudo conntrack -L -p tcp --src-nat” might show something like this:

tcp 6 114 TIME_WAIT src=10.0.0.10 dst=10.8.2.12 sport=5536 dport=80 src=10.8.2.12 dst=192.168.1.2 sport=80 dport=5536 [ASSURED]

This entry shows a connection from 10.0.0.10:5536 to 10.8.2.12:80. But unlike the previous example, the reply direction is not just the inverted original direction: the source address is changed. The destination host (10.8.2.12) sends reply packets to 192.168.1.2 instead of 10.0.0.10. Whenever 10.0.0.10 sends another packet, the router with this entry replaces the source address with 192.168.1.2. When 10.8.2.12 sends a reply, it changes the destination back to 10.0.0.10. This source NAT is due to an nft masquerade rule:

inet nat postrouting meta oifname "veth0" masquerade

Other types of NAT rules, such as “dnat to” or “redirect to”, would be shown in a similar fashion, with the reply tuple’s destination different from the original one.
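For reference, a masquerade rule like the one shown above could be created roughly as follows. This is only a minimal sketch; the table and chain names are conventional choices, not something mandated by nftables:

sudo nft add table inet nat
sudo nft add chain inet nat postrouting '{ type nat hook postrouting priority srcnat; }'
sudo nft add rule inet nat postrouting meta oifname "veth0" masquerade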

Conntrack extensions

Two useful extensions are conntrack accounting and timestamping. “sudo sysctl net.netfilter.nf_conntrack_acct=1” makes “sudo conntrack -L” track byte and packet counters for each flow.

“sudo sysctl net.netfilter.nf_conntrack_timestamp=1” records a “start timestamp” for each connection. “sudo conntrack -L” then displays the seconds elapsed since the flow was first seen. Add “--output ktimestamp” to see the absolute start date as well.
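Both settings are lost on reboot. One way to make them permanent (the file name below is an arbitrary choice) is to drop them into /etc/sysctl.d; keep in mind they only take effect once the nf_conntrack module is loaded:

echo "net.netfilter.nf_conntrack_acct=1" | sudo tee /etc/sysctl.d/90-conntrack.conf
echo "net.netfilter.nf_conntrack_timestamp=1" | sudo tee -a /etc/sysctl.d/90-conntrack.conf
sudo sysctl --system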

Insert and change entries

You can add entries to the state table. For example:

sudo conntrack -I -s 192.168.7.10 -d 10.1.1.1 --protonum 17 --timeout 120 --sport 12345 --dport 80

This is used by conntrackd for state replication. Entries of an active firewall are replicated to a standby system, which can then take over without breaking connectivity even on established flows. Conntrack can also store metadata not related to the packet data sent on the wire, for example the conntrack mark and connection tracking labels. Change them with the “update” (-U) option:

sudo conntrack -U -m 42 -p tcp

This changes the connmark of all tcp flows to 42.
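The update can be combined with the same filter options used for listing, so you do not have to re-mark every flow. For example, a sketch that only changes flows to destination port 443 (check conntrack(8) for the filters your version supports):

sudo conntrack -U -p tcp --dport 443 -m 7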

Delete entries

In some cases, you want to delete entries from the state table. For example, changes to NAT rules have no effect on packets belonging to flows that are already in the table. For long-lived UDP sessions, such as tunneling protocols like VXLAN, it might make sense to delete the entry so the new NAT transformation can take effect. Delete entries via “sudo conntrack -D” followed by an optional list of address and port information. The following example removes the given entry from the table:

sudo conntrack -D -p udp --src 10.0.12.4 --dst 10.0.0.1 --sport 1234 --dport 53

Conntrack error counters

Conntrack also exports statistics:

# sudo conntrack -S
cpu=0 found=0 invalid=130 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=10
cpu=1 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
cpu=2 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=1
cpu=3 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0

Most counters will be 0. “Found” and “insert” will always be 0; they only exist for backwards compatibility. The other counters are:

  • invalid: the packet does not match an existing connection and doesn’t create a new connection.
  • insert_failed: the packet starts a new connection, but insertion into the state table failed. This can happen, for example, when the NAT engine happens to pick an identical source address and port when masquerading.
  • drop: the packet starts a new connection, but no memory is available to allocate a new state entry for it.
  • early_drop: the conntrack table is full. In order to accept the new connection, existing connections that did not see two-way communication were dropped.
  • error: an ICMP(v6) error packet was received that did not match a known connection.
  • search_restart: a lookup was interrupted by an insertion or deletion on another CPU.
  • clash_resolve: several CPUs tried to insert an identical conntrack entry.

These error conditions are harmless unless they occur frequently. Some can be mitigated by tuning the conntrack sysctls for the expected workload. net.netfilter.nf_conntrack_buckets and net.netfilter.nf_conntrack_max are typical candidates. See the nf_conntrack-sysctl documentation for a full list.
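As an illustration, you can check the current table usage against the limit and raise it at runtime; on recent kernels the bucket count can be adjusted the same way. The numbers below are placeholders, not recommendations; size them for your workload and available memory:

sudo sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
sudo sysctl -w net.netfilter.nf_conntrack_max=262144
sudo sysctl -w net.netfilter.nf_conntrack_buckets=65536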

Use “sudo sysctl net.netfilter.nf_conntrack_log_invalid=255” to get more information when a packet is invalid. For example, conntrack logs the following when it encounters a packet with all TCP flags cleared:

nf_ct_proto_6: invalid tcp flag combination SRC=10.0.2.1 DST=10.0.96.7 LEN=1040 TOS=0x00 PREC=0x00 TTL=255 ID=0 PROTO=TCP SPT=5723 DPT=443 SEQ=1 ACK=0
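These messages end up in the kernel log, so one way to watch for them is:

sudo dmesg | grep nf_ct_proto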

Summary

This article gave an introduction on how to inspect the connection tracking table and the NAT information stored in tracked flows. The next part in the series will expand on the conntrack tool and the connection tracking event framework.

Fedora Aarch64 on the SolidRun HoneyComb LX2K

Monday 8th of February 2021 08:00:00 AM

Almost a year has passed since the HoneyComb development kit was released by SolidRun. I remember reading about this Mini-ITX Arm workstation board being released and thinking “what a great idea.” Then I saw the price and realized this isn’t just another Raspberry Pi killer. Currently that price is $750 USD plus shipping and duty. Niche devices like the HoneyComb aren’t mass produced like the simpler Pi is, and they pack in quite a bit of high-end tech. Eventually COVID lockdown boredom got the best of me and I put a build together. Adding a case and RAM, the build ended up costing about $1100 shipped to London. This is an account of my experiences and the current state of using Fedora on this fun bit of hardware.

First and foremost, the tech packed into this board is impressive. It’s not about to kill a Xeon workstation in raw performance, but it’s going to wallop it in performance-per-watt efficiency. Essentially this is a powerful server in the energy footprint of a small laptop. It’s also a powerful hybrid of compute and network functionality, combining powerful network features in a carrier board with a modular daughter card sporting a 16-core A72 and two ECC-capable DDR4 SO-DIMM slots. The carrier board comes in a few editions, giving you the flexibility to swap or upgrade your RAM + CPU options. I purchased the edition pictured below with 16 cores, 32GB (non-ECC), 512GB NVMe, and 4x10GbE. For an extra $250 you can add the 100GbE option if you’re building a 5G deployment or an ISP for a small country (bottom right of the board). Imagine this jacked into a 100Gb uplink port acting as a proxy, TLS inspector, router, or storage for a large 10Gb TOR switch.

When I ordered it I didn’t fully understand the network co-processor included from NXP. NXP is the company that makes the unique LX2160A CPU/SoC for this board, as well as the configurable ports and offload engine that enable handling up to 150Gb/s of network traffic without the CPU breaking a sweat. Here is a list of options from NXP’s Layerscape user manual.

Configure ports in switch, LAG, MUX mode, or straight NICs.

I have a 10Gb network in my home attic via a Ubiquiti ES-16-XG, so I was eager to see how much this board could push. I also have a QNAP connected via 10Gb which rarely manages to saturate the line, so could this also be a NAS replacement? It turned out I needed to sort out drivers and get a stable install first. Since the board has been out for a year, I had some catching up to do. SolidRun keeps an active Discord, Developer-Ecosystem, which was immensely helpful, as the install wasn’t as straightforward as previous blogs have suggested. I’ve always been cursed. If you’ve ever seen Pure Luck, I’m bound to hit every hardware glitch.

For starters, you can add a GPU and install graphically, or install via the USB console. I started with a spare GPU (Radeon Pro WX2100) intending to build a headless box, which in the end over-complicated things. If you need to swap parts or re-flash the BIOS via the microSD card, you’ll need to swap in a display, keyboard, and mouse. Chaos. It’s much simpler to just plug into the micro USB console port and access it via /dev/ttyUSB0 for that picture-in-picture experience. It’s really great to have the open-ended PCIe3 x8 slot, but I’ll keep it open for now. Note that the board does not support PCIe Atomics, so some devices may have compatibility issues.

Now comes the fun part. The BIOS is not built in here. You’ll need to build it from source to match your RAM speed and install it via microSDHC. At first this seems annoying, but then you realize that with a removable BIOS installer it’s pretty hard to brick this thing. Not bad. The good news is the latest UEFI builds have worked well for me. Just remember that every time you re-flash your BIOS you’ll need to set everything up again. This was enough to boot Fedora aarch64 from USB. The board offers 64GB of eMMC flash which you can install to if you like. I immediately benched it and found it reads about 165MB/s and writes 55MB/s, which is practical speed for embedded usage, but I’ll definitely be installing to NVMe instead. I had an older Samsung 950 Pro in my spares from a previous Linux box, but I encountered major issues with it even with the widely documented kernel param workaround:

nvme_core.default_ps_max_latency_us=0

In the end I upgraded my main workstation so I could repurpose its existing Samsung EVO 960 for the HoneyComb which worked much better.

After some fidgeting I was able to install Fedora, but it became apparent that the integrated network ports still don’t work with the mainline kernel. The NXP tech is great but requires a custom kernel build and tooling. Some earlier blogs got around this with a USB-to-RJ45 Ethernet adapter, which works fine. Hopefully network support will be mainlined soon, but for now I snagged a kernel SRPM from the helpful engineers on Discord. With the custom kernel the 1GbE NIC worked fine, but it turns out the SFP+ ports need more configuration. They won’t be recognized as interfaces until you use NXP’s restool utility to map ports to their usage. In this case just a runtime mapping of dpmac -> dpni was required. This is NXP’s way of mapping a MAC to a network interface via IOCTL commands. The restool binary isn’t provided either and must be built from source. It then layers on management scripts which use cheeky $arg0 references for redirection to call the restool binary with complex arguments.

Since I was starting to accumulate quite a few custom packages, it was apparent that a COPR repo was needed to simplify this for Fedora. If you’re not familiar with COPR, I think it’s one of Fedora’s finest resources. This repo contains the UEFI build (currently failing to build), a 5.10.5 kernel built with network support, and the restool binary with supporting scripts. I also added a oneshot systemd unit to enable the SFP+ ports on boot:

systemctl enable --now dpmac@7.service
systemctl enable --now dpmac@8.service
systemctl enable --now dpmac@9.service
systemctl enable --now dpmac@10.service

Now each SFP+ port will boot configured as eth1-4, with eth0 being the 1GbE port. NetworkManager will struggle unless these are consistent, and if you change the service start order the eth devices will re-order. I actually put a sleep $@ in each activation so they are consistent and don’t have locking issues. Unfortunately it adds 10 seconds to boot time. This has been fixed in the latest kernel and won’t be an issue once mainlined.

I’d love to explore the built-in LAG features, but this still needs to be coded into the restool options. I’ll save it for later. In the meantime I managed a single 10Gb link as primary, and a 3×10 LACP team for kicks. Eventually I changed to 4×10 LACP via copper SFP+ cables mounted in the attic.
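For anyone who wants to rig up something similar, here is a rough sketch of creating an LACP team with nmcli. The connection and interface names are illustrative, the switch-side LAG must be configured to match, and teamdctl (mentioned again later) shows the runner state:

# Create an LACP (802.3ad) team and attach two ports
nmcli connection add type team con-name team0 ifname team0 team.runner lacp
nmcli connection add type team-slave con-name team0-port1 ifname eth1 master team0
nmcli connection add type team-slave con-name team0-port2 ifname eth2 master team0
nmcli connection up team0

# Inspect the LACP runner state
sudo teamdctl team0 state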

Energy Efficiency

Now with a stable environment, it’s time to raise some hell. It’s really nice to see PWM support was recently added for the CPU fan, which sounds like a mini jet engine without it. Now the sound level is perfectly manageable and thermal control is automatic. Time for a test drive with a power meter. Total power usage is consistently between 20-40 watts (usually in the low 20s), which is really impressive. I tried a few tuned profiles which didn’t seem to have much effect on energy. Adding a power-hungry GPU or device can obviously increase that, but for a dev server it’s perfect and well below the Z600 workstations I have next to it, which consume 160-250 watts each when fired up.

Remote Access

I’m an old soul so I still prefer KDE with Xorg and NX via X2go server. I can access SSH or a full GUI at native performance without a GPU. This lets me get a feel for performance, thermal stats, and also helps to evaluate the device as a workstation or potential VDI. The version of KDE shipped with the aarch64 server spin doesn’t seem to recognize some sensors but that seems to be because of KDE’s latest widget changes which I’d have to dig into.

X2go KDE session over SSH

Cockpit support is also outstanding out of the box. If SSH and X2go remote access aren’t your thing, Cockpit provides a great remote management platform with a growing list of plugins. Everything works great in my experience.

Cockpit behaves as expected.

All I needed to do now was shift into high gear with jumbo frames. MTU 1500 yields an iperf result of about 2-4Gbps, bottlenecked at CPU0. Ain’t nobody got time for that. Set MTU 9000 and suddenly it gets the full 10Gbps both ways with time to spare on the CPU. Again, it would be nice to use the hardware-assisted LAG since the device is supposed to handle up to 150Gbps duplex no sweat (with the 100GbE QSFP option), which is nice given the Ubiquiti ES-16-XG tops out at 160Gbps full duplex (10Gb x 16 ports).
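If NetworkManager manages the links, jumbo frames are a per-connection property. The connection name below is illustrative, and the switch ports need to allow MTU 9000 too:

nmcli connection modify team0-port1 802-3-ethernet.mtu 9000
nmcli connection up team0-port1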

Storage

As a storage solution this hardware provides great value in a small thermal window and energy-saving footprint. I could accomplish similar performance with an old x86 box for cheap, but the energy usage alone would eclipse any savings in short order. By comparison, I’ve seen some consumer NAS devices offer 10GbE and NVMe cache while sharing an inadequate number of PCIe2 lanes, bottlenecked at the bus. This is fully customizable, and since the energy footprint is similar to a small laptop, a small UPS backup should allow full writeback cache mode for maximum performance. This would make a great oVirt NFS or iSCSI storage pool if needed. I would pair it with a nice NAS case or a rack-mount case with bays. Some vendors such as Bamboo are actually building server options around this platform as we speak.

The board has 4 SATA3 ports, but if I were truly going to build a NAS with this I would probably add a RAID card that makes the best use of the PCIe x8 slot, which thankfully is open-ended. Why some hardware vendors choose to include close-ended PCIe x8 and x4 slots is beyond me. Future models will ship with a physical x16 slot, but only x8 electrically. Some users on the SolidRun Discord talk about bifurcation and splitting out the 8 PCIe lanes, which is an option as well. Note that some of those lanes are also reserved for NVMe, SATA, and network. The CEX7 form factor and interchangeable carrier board present interesting possibilities later, as the NXP LX2160A docs claim to support up to 24 lanes. For a dev board it’s perfectly fine as-is.

Network Perf

For now I’ve managed to rig up a 4×10 LACP team with NetworkManager for full load balancing. This same setup can be done with a QSFP+ breakout cable. The KDE NetworkManager widget still doesn’t support teams, but I can set them up via nm-connection-editor or Cockpit. Automation could be achieved with nmcli and teamdctl, as sketched earlier. An iperf3 test shows the connection maxing out at about 13Gbps to/from the 2×10 LACP team on my workstation. I know that iperf isn’t a true indication of real-world usage, but it’s fun for benchmarks and tuning nonetheless. This did in fact require a lot of tuning, and at this point I feel like I could fill a book just with iperf stats.

$ iperf3 -c honeycomb -P 4 --cport 5000 -R
Connecting to host honeycomb, port 5201
Reverse mode, remote host honeycomb is sending
[  5] local 192.168.2.10 port 5000 connected to 192.168.2.4 port 5201
[  7] local 192.168.2.10 port 5001 connected to 192.168.2.4 port 5201
[  9] local 192.168.2.10 port 5002 connected to 192.168.2.4 port 5201
[ 11] local 192.168.2.10 port 5003 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[  7]   1.00-2.00   sec   382 MBytes  3.21 Gbits/sec
[  9]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[ 11]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[SUM]   1.00-2.00   sec  1.49 GBytes  12.8 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
(TRUNCATED)
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec   380 MBytes  3.18 Gbits/sec
[  7]   2.00-3.00   sec   380 MBytes  3.19 Gbits/sec
[  9]   2.00-3.00   sec   380 MBytes  3.18 Gbits/sec
[ 11]   2.00-3.00   sec   380 MBytes  3.19 Gbits/sec
[SUM]   2.00-3.00   sec  1.48 GBytes  12.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.67 GBytes  3.16 Gbits/sec    1   sender
[  5]   0.00-10.00  sec  3.67 GBytes  3.15 Gbits/sec        receiver
[  7]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec    7   sender
[  7]   0.00-10.00  sec  3.67 GBytes  3.15 Gbits/sec        receiver
[  9]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec   36   sender
[  9]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec        receiver
[ 11]   0.00-10.00  sec  3.69 GBytes  3.17 Gbits/sec    1   sender
[ 11]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec        receiver
[SUM]   0.00-10.00  sec  14.7 GBytes  12.6 Gbits/sec   45   sender
[SUM]   0.00-10.00  sec  14.7 GBytes  12.6 Gbits/sec        receiver

iperf Done

Notes on iperf3

I struggled with LACP Team configuration for hours, having done this before with an HP cluster on the same switch. I’d heard stories about bonds being old news with team support adding better load balancing to single TCP flows. This still seems bogus as you still can’t load balance a single flow with a team in my experience. Also LACP claims to be fully automated and easier to set up than traditional load balanced trunks but I find the opposite to be true. For all it claims to automate you still need to have hashing algorithms configured correctly at switches and host. With a few quirks along the way I once accidentally left a team in broadcast mode (not LACP) which registered duplicate packets on the iperf server and made it look like a single connection was getting double bandwidth. That mistake caused confusion as I tried to reproduce it with LACP.

Then I finally found the LACP hash settings in Ubiquiti’s new firmware GUI. It’s hidden behind a tiny pencil icon on each LAG. I managed to set my LAGs to hash on Src+Dst IP+port when they were defaulting to MAC/port. Still, I was only seeing traffic on one slave of my 2×10 team, even with parallel clients. Eventually I tried parallel clients with -V and it all made sense. By default, iperf3 client ports are ephemeral but they follow an even sequence: 42174, 42176, 42178, 42180, etc. If your load-balancing hash across a pair of sequential MACs includes src+dst port but those ports are always even, you’ll never hit the other interface with an odd MAC. How crazy is that for iperf to do? I tried looking at the source for iperf3 and I don’t even see how that could be happening. Instead, if you specify a client port as well as parallel clients, they use a straight sequence: 50000, 50001, 50002, 50003, etc. With odd and even numbers in the client ports, I’m finally able to load balance across all interfaces in all LAG groups. This setup would scale out well with more clients on the network.

Proper LACP load balancing.

Everything could probably be tuned a bit better but for now it is excellent performance and it puts my QNAP to shame. I’ll continue experimenting with the network co-processor and seeing if I can enable the native LAG support for even better performance. Across the network I would expect a practical peak of about 40 Gbps raw which is great.

Virtualization

What about virtualization? One of the best parts about having 16 A72 cores is support for aarch64 VMs at full speed using KVM, which you won’t be able to do on x86. I can use this single box to spin up a dozen or so VMs at a time for CI automation and testing, or just to test our latest HashiCorp builds with aarch64 builds on COPR. QEMU on x86 without KVM can emulate aarch64, but it crawls by comparison. I’ve not yet tried to add it to an oVirt cluster, but it’s really snappy and proves more cost effective than spinning up Arm VMs in a cloud. One of the use cases for this environment is NFV, and I think it fits perfectly so long as you pair it with ECC RAM, which I skipped since I’m not running anything critical. If anybody wants to test drive a VM, DM me and I’ll try to get you some temp access.

Virtual Machines in Cockpit

Benchmarks

Phoronix has already run quite a few benchmarks on OpenBenchmarking.org, but I wanted to rerun them with the latest versions on my own Fedora 33 build for consistency. I also wanted to compare them to my Xeons, which is not really a fair comparison. Both use DDR4 with similar clock speeds, around 2GHz, but different architectures and caches obviously yield different results. Also, the Xeons are dual socket, which is a huge cooling advantage for single-threaded workloads. You can watch one process bounce between the coolest CPU sockets. The HoneyComb doesn’t have this luxury and has a smaller fan, but the clock speed is playing it safe and slow at 2GHz, so I would bet the SoC has room to run faster if cooling were adjusted. I also haven’t played with the PWM settings to adjust the fan speed up just in case. Benchmarks were performed using the tuned profile network-throughput.
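For reference, switching to that profile is a single command, and tuned-adm can confirm which profile is in effect:

sudo tuned-adm profile network-throughput
tuned-adm active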

Strangely, some single-core operations seem to actually perform better on the HoneyComb than they do on my Xeons. I tried single-threaded zstd compression with the default level 3 on a few files and found it actually performs consistently better on the HoneyComb. However, using the actual pts/compress-zstd benchmark with the multithreaded option turns the tables. The 16 cores still manage an impressive 2073 MB/s:

Zstd Compression 1.4.5:
   pts/compress-zstd-1.2.1 [Compression Level: 3]
   Test 1 of 1
   Estimated Trial Run Count:    3                      
   Estimated Time To Completion: 9 Minutes [22:41 UTC]  
       Started Run 1 @ 22:33:02
       Started Run 2 @ 22:33:53
       Started Run 3 @ 22:34:37
   Compression Level: 3:
2079.3
2067.5
       2073.9
   Average: 2073.57 MB/s
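If you want to reproduce this run, the Phoronix Test Suite can launch the same test. This assumes the phoronix-test-suite package is available in your repositories; otherwise grab it from the upstream site:

sudo dnf install phoronix-test-suite
phoronix-test-suite benchmark compress-zstd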

For an apples-to-oranges comparison, my 2×10-core Xeon E5-2660 v3 box does 2790 MB/s, so 2073 seems perfectly respectable for a potential workstation. Paired with a midrange GPU, this device would also make a great video transcoder or media server. Some users have asked about mining, but I wouldn’t use one of these for mining cryptocurrency. The lack of PCIe Atomics means certain OpenCL and CUDA features might not be supported, and with only 8 PCIe lanes exposed you’re fairly limited. That said, it could potentially make a great mobile ML, VR, IoT, or vision development platform. The possibilities are pretty open as the whole package is very well balanced and flexible.

Conclusion

I wasn’t organized enough this year to arrange a FOSDEM visit, but this is something I would have loved to talk about. I’m definitely glad I tried it out. Special thanks to Jon Nettleton and the folks on SolidRun’s Discord for the help and troubleshooting. The kit is powerful and potentially replaces a lot of energy waste in my home lab. It provides a great Arm platform for development, and it’s great to see how solid Fedora’s alternative architecture support is. I got my Linux start on Gentoo back in the day, but Fedora really has upped its arch game. I’m really glad I didn’t have to sit waiting for compilation on a proprietary platform. I look forward to the remaining patches being mainlined into the Fedora kernel, and I hope to see a few more generations use this package, especially as Apple goes all in on Arm. It will also be interesting to see what features emerge if Nvidia’s Arm acquisition goes through.

Astrophotography with Fedora Astronomy Lab: setting up

Friday 5th of February 2021 08:00:00 AM

You love astrophotography. You love Fedora Linux. What if you could do the former using the latter? Capturing stunning and awe-inspiring astrophotographs, processing them, and editing them for printing or sharing online using Fedora is absolutely possible! This tutorial guides you through the process of setting up a computer-guided telescope mount, guide cameras, imaging cameras, and other pieces of equipment. A future article will cover capturing and processing data into pleasing images. Please note that while this article is written with certain aspects of the astrophotography process included or omitted based on my own equipment, you can custom-tailor it to fit your own equipment and experience. Let’s capture some photons!

Installing Fedora Astronomy Lab

This tutorial focuses on Fedora Astronomy Lab, so it only makes sense that the first thing we should do is get it installed. But first, a quick introduction: based on the KDE Plasma desktop, Fedora Astronomy Lab includes many pieces of open source software to aid astronomers in planning observations, capturing data, processing images, and controlling astronomical equipment.

Download Fedora Astronomy Lab from the Fedora Labs website. You will need a USB flash-drive with at least eight GB of storage. Once you have downloaded the ISO image, use Fedora Media Writer to write the image to your USB flash-drive. After this is done, boot from the USB drive you just flashed and install Fedora Astronomy Lab to your hard drive. While you can use Fedora Astronomy Lab in a live-environment right from the flash drive, you should install to the hard drive to prevent bottlenecks when processing large amounts of astronomical data.

Configuring your installation

Before you can go capturing the heavens, you need to do some minor setup in Fedora Astronomy Lab.

First of all, you need to add your user to the dialout group so that you can access certain pieces of astronomical equipment from within the guiding software. Do that by opening the terminal (Konsole) and running this command (replacing user with your username):

sudo usermod -a -G dialout user

My personal setup includes a guide camera (QHY5 series, also known as Orion Starshoot) that does not have a driver in the mainline Fedora repositories. To enable it, you need to install the qhyccd SDK. (Note that this package is not officially supported by Fedora. Use it at your own risk.) At the time of writing, I chose to use the latest stable release, 20.08.26. Once you have downloaded the Linux 64-bit version of the SDK, extract it:

tar zxvf sdk_linux64_20.08.26.tgz

Now change into the directory you just extracted, change the permissions of the install.sh file to make it executable, and run install.sh:

cd sdk_linux64_20.08.26
chmod +x install.sh
sudo ./install.sh

Now it’s time to install the qhyccd INDI driver. INDI is an open source software library used to control astronomical equipment. Unfortunately, the driver is unavailable in the mainline Fedora repositories, but it is in a Copr repository. (Note: Copr is not officially supported by Fedora infrastructure. Use packages at your own risk.) If you prefer to have the newest (and perhaps unstable!) pieces of astronomy software, you can also enable the “bleeding” repositories at this time by following this guide. For this tutorial, you are only going to enable one repo:

sudo dnf copr enable xsnrg/indi-3rdparty-bleeding

Install the driver by running the following command:

sudo dnf install indi-qhy

Finally, update all of your system packages:

sudo dnf update -y

To recap what you accomplished in this section: you added your user to the dialout group, downloaded and installed the qhyccd SDK, enabled the indi-3rdparty-bleeding Copr, installed the qhyccd INDI driver with dnf, and updated your system.

Connecting your equipment

This is the time to connect all your equipment to your computer. Most astronomical equipment connects via USB, and it’s really as easy as plugging each device into your computer’s USB ports. If you have a lot of equipment (mount, imaging camera, guide camera, focuser, filter wheel, etc.), you should use an external powered USB hub to make sure that all connected devices have adequate power. Once you have everything plugged in, run the following command to ensure that the system recognizes your equipment:

lsusb

You should see output similar to (but not the same as) the output here:

You see in the output that the system recognizes the telescope mount (a SkyWatcher EQM-35 Pro) as Prolific Technology, Inc. PL2303 Serial Port, the imaging camera (a Sony a6000) as Sony Corp. ILCE-6000, and the guide camera (an Orion Starshoot, aka QHY5) as Van Ouijen Technische Informatica. Now that you have made sure your system recognizes your equipment, it’s time to open your desktop planetarium and telescope controller, KStars!

Setting up KStars

It’s time to open KStars, which is a desktop planetarium and also includes the Ekos telescope control software. The first time you open KStars, you will see the KStars Startup Wizard.

Follow the prompts to choose your home location (where you will be imaging from) and Download Extra Data…

Setting your location / “Download Extra Data” / Choosing which catalogs to download

This will allow you to install additional star, nebula, and galaxy catalogs. You don’t need them, but they don’t take up too much space and add to the experience of using KStars. Once you’ve completed this, hit Done in the bottom right corner to continue.

Getting familiar with KStars

Now is a good time to play around with the KStars interface. You are greeted with a spherical image with a coordinate plane and stars in the sky.

This is the desktop planetarium which allows you to view the placement of objects in the night sky. Double-clicking an object selects it, and right clicking on an object gives you options like Center & Track which will follow the object in the planetarium, compensating for sidereal time. Show DSS Image shows a real digitized sky survey image of the selected object.

Another essential feature is the Set Time option in the toolbar. Clicking this will allow you to input a future (or past) time and then simulate the night sky as if that were the current date.

The Set Time button

Configuring capture equipment with Ekos

You’re familiar with the KStars layout and some basic functions, so it’s time to move on configuring your equipment using the Ekos observatory controller and automation tool. To open Ekos, click the observatory button in the toolbar or go to Tools > Ekos.

The Ekos button on the toolbar

You will see another setup wizard: the Ekos Profile Wizard. Click Next to start the wizard.

In this tutorial, you have all of your equipment connected directly to your computer. A future article will cover using an INDI server installed on a remote computer to control your equipment, allowing you to connect over a network and not have to be in the same physical space as your gear. For now though, select Equipment is attached to this device.

You are now asked to name your equipment profile. I usually name mine something like “Local Gear” to differentiate it from profiles that are for remote gear, but name your profile what you wish. Leave the button marked Internal Guide checked and don’t select any additional services. Now click the Create Profile & Select Devices button.

This next screen is where you select the particular driver to use for each individual piece of equipment. This part will be specific to your setup depending on what gear you use. For this tutorial, I will select the drivers for my setup.

My mount, a SkyWatcher EQM-35 Pro, uses the EQMod Mount driver under SkyWatcher in the menu (this driver is also compatible with all SkyWatcher equatorial mounts, including the EQ6-R Pro and the EQ8-R Pro). For my Sony a6000 imaging camera, I choose Sony DSLR under DSLRs in the CCD category. Under Guider, I choose QHY CCD under QHY for my Orion Starshoot (and any QHY5 series camera). The last driver to select is under the Aux 1 category: choose Astrometry from the drop-down window. This enables the Astrometry plate solver from within Ekos, which allows the telescope to automatically figure out where in the night sky it is pointed, saving you the time and hassle of doing a one, two, or three star calibration after setting up the mount.

You have selected your drivers. Now it’s time to configure your telescope. Add new telescope profiles by clicking the + button in the lower right. This is essential for computing field-of-view measurements so you can tell what your images will look like when you open the shutter. Once you click the + button, you will be presented with a form where you can enter the specifications of your telescope and guide scope. For my imaging telescope, I enter Celestron into the Vendor field and SS-80 into the Model field, leave the Driver field as None and the Type field as Refractor, and set the Aperture to 80mm and the Focal Length to 400mm.

After you enter the data, hit the Save button. You will see the data you just entered appear in the left window with an index number of 1 next to it. Now you can go about entering the specs for your guide scope following the steps above. Once you hit save here, the guide scope will also appear in the left window with an index number of 2. Once all of your scopes are entered, close this window. Now select your Primary and Guide telescopes from the drop-down window.

After all that work, everything should be correctly configured! Click the Close button and complete the final bit of setup.

Starting your capture equipment

This last step before you can start taking images should be easy enough. Click the Play button under Start & Stop Ekos to connect to your equipment.

You will be greeted with a screen that looks similar to this:

When you click on the tabs at the top of the screen, they should all show a green dot next to Connection, indicating that they are connected to your system. On my setup, the baud rate for my mount (the EQMod Mount tab) is set incorrectly, and so the mount is not connected.

This is an easy fix; click on the EQMod Mount tab, then the Connection sub-tab, and then change the baud rate from 9600 to 115200. Now is a good time to ensure the serial port under Ports is the correct serial port for your mount. You can check which port the system has assigned to your device by running the command:

ls /dev | grep USB

You should see ttyUSB0. If there is more than one USB-serial device plugged in at a time, you will see more than one ttyUSB port, each with an incrementing number. To figure out which port is correct, unplug your mount and run the command again.
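Another way to identify the port, purely as a convenience, is to watch the kernel log while plugging the mount back in; the exact messages depend on your USB-serial adapter:

sudo dmesg --follow | grep --line-buffered ttyUSB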

Now click on the Main Control sub-tab, click Connect again, and wait for the mount to connect. It might take a few seconds; be patient and it should connect.

The last thing to do is set the sensor and pixel size parameters for the camera. Under the Sony DSLR Alpha-A6000 (Control) tab, select the Image Info sub-tab. This is where you can enter your sensor specifications; if you don’t know them, a quick search on the internet will bring up your sensor’s maximum resolution as well as its pixel pitch. Enter this data into the right-side boxes, then press the Set button to load it into the left boxes and save it to memory. Hit the Close button when you are done.

Conclusion

Your equipment is ready to use. In the next article, you will learn how to capture and process the images.

Join Fedora at FOSDEM 2021

Monday 1st of February 2021 08:00:00 AM

Every year in Brussels, Belgium, the first weekend of February is dedicated to the Free and Open source Software Developers’ European Meeting (FOSDEM). This is the largest open source, developer-oriented conference of the year. As expected, the conference is going online for the 2021 edition, which gives open source enthusiasts from everywhere the opportunity to attend. You can participate with the Fedora community virtually, too.

The Fedora Project has a long history of attendance at FOSDEM (since 2006) and 2021 will not be an exception. Every year a team of dedicated volunteers, advocates, and ambassadors staff a booth, hand out swag, and answer questions related to the Fedora Project. Although we will miss seeing everyone’s faces in person this year, we are still excited to catch up with friends, old and new.

FOSDEM 2021 will be held on February 6th & 7th and we expressly invite you to “stop by” our virtual booth, check out what’s new in Fedora, and see what is planned for the upcoming year. The Fedora Project has its own web page where you can join us and find out what we have planned for the FOSDEM weekend! Visiting a FOSDEM stand this year is a little different. Here’s how you can participate with the Fedora community.

Chat

The FOSDEM organizers have created a Fedora room on a dedicated server. You can either join the room directly and create a new Matrix account or join with a pre-existing account.

To join with a pre-existing account:

  • Go to the Element home page
  • Click “Explore more rooms”
  • Click on the dropdown list in order to add the FOSDEM server
  • On the bottom of the dropdown list click “Add a new server”
  • Enter “fosdem.org” and click “Add”
  • Join the Fedora chat

Welcome Sessions

Immediately before the start of each day’s sessions, there will be a “Welcome Session” for each stand from 0830-0900 UTC (9:30-10AM CET). There will be a group of Fedora folks hanging out in our chatroom during those times. Join us in our virtual booth.

Social Hours

Due to pandemic life, Fedorans have been meeting for a weekly social hour since April 2020. We have found them to be a fun way to stay connected, and we want to connect with you at FOSDEM! This is a great opportunity for conference attendees to meet the faces of Fedora. There will be two Social Hours: one on Saturday, 6 February at 1500 UTC and one on Sunday 7 February at 1500 UTC. Join us in our virtual booth!

Raffle

What would FOSDEM be without swag? In order to get that sweet swag into people’s hands this year, we are holding a raffle contest!

Join our chat room to find the raffle entry link. You will need to enter your name and email address to enter. We will randomly draw 100 winners on 8 February at 1700 UTC. The winners will receive a follow up form to provide shipment information. Terms & Conditions are available on the entry form.

Videos

The FOSDEM organizers have created a dedicated space for us to upload some videos to showcase our project. If you haven’t already watched them, we will have:

  • “State of Fedora” from Matthew Miller from Nest 2020
  • Unboxing of Lenovo laptop running Fedora
  • Fedora 33 Release Party videos
  • Budapest community video from Flock 2019

Manage containers with Podman Compose

Friday 29th of January 2021 08:00:00 AM

Containers are awesome, allowing you to package your application along with its dependencies and run it anywhere. Starting with Docker in 2013, containers have been making the lives of software developers much easier.

One of the downsides of Docker is it has a central daemon that runs as the root user, and this has security implications. But this is where Podman comes in handy. Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux system in root or rootless mode.

There are other articles on Fedora Magazine you can use to learn more about Podman. Two examples follow:

If you have worked with Docker, chances are you also know about Docker Compose, which is a tool for orchestrating several containers that might be interdependent. To learn more about Docker Compose see its documentation.

What is Podman Compose?

Podman Compose is a project whose goal is to serve as an alternative to Docker Compose without requiring any changes to the docker-compose.yaml file. Since Podman Compose works using pods, it’s good to review the definition of a pod.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers.

Pods – Kubernetes Documentation

The basic idea behind Podman Compose is that it picks up the services defined inside the docker-compose.yaml file and creates a container for each service. A major difference between Docker Compose and Podman Compose is that Podman Compose adds the containers to a single pod for the whole project, and all the containers share the same network. It even names the containers the same way Docker Compose does, using the --add-host flag when creating the containers, as you will see in the example.

Installation

Complete install instructions for Podman Compose are found on its project page, and there are several ways to do it. To install the latest development version, use the following command:

pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz

Make sure you also have Podman installed since you’ll need it as well. On Fedora, to install Podman use the following command:

sudo dnf install podman

Example: launching a WordPress site with Podman Compose

Imagine your docker-compose.yaml file is in a folder called wpsite. A typical docker-compose.yaml (or docker-compose.yml) for a WordPress site looks like this:

version: "3.8" services: web: image: wordpress restart: always volumes: - wordpress:/var/www/html ports: - 8080:80 environment: WORDPRESS_DB_HOST: db WORDPRESS_DB_USER: magazine WORDPRESS_DB_NAME: magazine WORDPRESS_DB_PASSWORD: 1maGazine! WORDPRESS_TABLE_PREFIX: cz WORDPRESS_DEBUG: 0 depends_on: - db networks: - wpnet db: image: mariadb:10.5 restart: always ports: - 6603:3306 volumes: - wpdbvol:/var/lib/mysql environment: MYSQL_DATABASE: magazine MYSQL_USER: magazine MYSQL_PASSWORD: 1maGazine! MYSQL_ROOT_PASSWORD: 1maGazine! networks: - wpnet volumes: wordpress: {} wpdbvol: {} networks: wpnet: {}

If you come from a Docker background, you know you can launch these services by running docker-compose up. Docker Compose will create two containers named wpsite_web_1 and wpsite_db_1, and attach them to a network called wpsite_wpnet.

Now, see what happens when you run podman-compose up in the project directory. First, a pod is created named after the directory in which the command was issued. Next, it looks for any named volumes defined in the YAML file and creates the volumes if they do not exist. Then, one container is created per every service listed in the services section of the YAML file and added to the pod.

Naming of the containers is done similarly to Docker Compose. For example, for your web service, a container named wpsite_web_1 is created. Podman Compose also adds localhost aliases to each named container, so containers can still resolve each other by name, although they are not on a bridge network as in Docker. To do this, it uses the --add-host option, for example --add-host web:localhost.

Note that docker-compose.yaml includes port forwarding from host port 8080 to container port 80 for the web service. You should now be able to access your fresh WordPress instance from the browser using the address http://localhost:8080.
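If you prefer the terminal, a quick optional check that the web container is answering:

curl -I http://localhost:8080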

WordPress Dashboard

Controlling the pod and containers

To see your running containers, use podman ps, which shows the web and database containers along with the infra container in your pod.

CONTAINER ID  IMAGE                               COMMAND               CREATED      STATUS          PORTS                                         NAMES
a364a8d7cec7  docker.io/library/wordpress:latest  apache2-foregroun...  2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_web_1
c447024aa104  docker.io/library/mariadb:10.5      mysqld                2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_db_1
12b1e3418e3e  k8s.gcr.io/pause:3.2

You can also verify, with podman pod ls, that Podman created a pod for this project, named after the folder in which you issued the command:

POD ID        NAME             STATUS    CREATED      INFRA ID      # OF CONTAINERS
8a08a3a7773e  wpsite           Degraded  2 hours ago  12b1e3418e3e  3

To stop the containers, enter the following command in another command window:

podman-compose down

You can also do that by stopping and removing the pod. This essentially stops and removes all the containers and then the containing pod. So, the same thing can be achieved with these commands:

podman pod stop podname
podman pod rm podname

Note that this does not remove the volumes you defined in docker-compose.yaml. So, the state of your WordPress site is saved, and you can get it back by running this command:

podman-compose up

In conclusion, if you’re a Podman fan and do your container jobs with Podman, you can use Podman Compose to manage your containers in development and production.

Introduction to Thunderbird mail filters

Wednesday 27th of January 2021 08:00:00 AM

Everyone eventually runs into an inbox loaded with messages that they need to sort through. If you are like a lot of people, this is not a fast process. However, the use of mail filters can make the task a little less tedious by letting Thunderbird pre-sort the messages into categories that reflect their source, priority, or usefulness. This article is an introduction to creating filters in Thunderbird.

Filters may be created for each email account you have created in Thunderbird. These are the accounts you see in the main Thunderbird folder pane shown at the left of the “Classic Layout”.

Classic Layout

There are two methods that can be used to create mail filters for your accounts. The first is based on the currently selected account and the second on the currently selected message. Both are discussed here.

Message destination folder

Before filtering messages there has to be a destination for them. Create the destination by selecting a location to create a new folder. In this example the destination will be Local Folders shown in the accounts pane. Right click on Local Folders and select New Folder… from the menu.

Creating a new folder

Enter the name of the new folder in the menu and select Create Folder. The mail to filter is coming from the New York Times so that is the name entered.

Folder creation

Filter creation based on the selected account

Select the Inbox for the account you wish to filter and select the toolbar menu item at Tools > Message_Filters.

Message_Filters menu location


The Message Filters menu appears and is set to your pre-selected account as indicated at the top in the selection menu labelled Filters for:.

Message Filters menu


Previously created filters, if any, are listed beneath the account name in the “Filter Name” column. To the right of this list are controls that let you modify the filters selected. These controls are activated when you select a filter. More on this later.

Start creating your filter as follows:

  1. Verify the correct account has been pre-selected. It may be changed if necessary.
  2. Select New… from the menu list at the right.

When you select New you will see the Filter Rules menu where you define your filter. Note that when using New… you have the option to copy an existing filter to use as a template or to simply duplicate the settings.

Filter rules are made up of three things: the “property” to be tested, the “test”, and the “value” to be tested against. Once the condition is met, the “action” is performed.

Message Filters menu

Complete this filter as follows:

  1. Enter an appropriate name in the textbox labelled Filter name:
  2. Select the property From in the left drop down menu, if not set.
  3. Leave the test set to contains.
  4. Enter the value, in this case the email address of the sender.

Under the Perform these actions: section at the bottom, create an action rule to move the message and choose the destination.

  1. Select Move Messages to from the left end of the action line.
  2. Select Choose Folder… and select Local Folders > New York Times.
  3. Select OK.

By default, Apply filter when: is set to Manually Run and Getting New Mail:. This means the filter will be applied when new mail appears in the Inbox for this account, and you may also run it manually at any time if necessary. There are other options available, but they are too numerous to discuss in this introduction. They are, however, for the most part self-explanatory.

If more than one rule or action is to be created during the same session, the “+” to the right of each entry provides that option. Additional property, test, and value entries can be added. If more than one rule is created, make certain that the appropriate option for Match all of the following and Match any of the following is selected. In this example the choice does not matter since we are only setting one filter.

After selecting OK, the Message Filters menu is displayed again showing your newly created filter. Note that the menu items on the right side of the menu are now active for Edit… and Delete.

First filter in the Message Filters menu

Also notice the message “Enabled filters are run automatically in the order shown below”. If there are multiple filters the order is changed by selecting the one to be moved and using the Move to Top, Move Up, Move Down, or Move to Bottom buttons. The order can change the destination of your messages so consider the tests used in each filter carefully when deciding the order.

Since you have just created this filter you may wish to use the Run Now button to run your newly created filter on the Inbox shown to the left of the button.

Filter creation based on a message

An alternative creation technique is to select a message from the message pane and use the Create Filter From Message… option from the menu bar.

In this example the filter will use two rules to select the messages: the email address and a text string in the Subject line of the email. Start as follows:

  1. Select a message in the message pane.
  2. Select the filter options on the toolbar at Message > Create Filter From Message….
Create new filters from Messages

The pre-selected message, highlighted in grey in the message pane above, determines the account used and Create Filter From Message… takes you directly to the Filter Rules menu.

The property (From), test (is), and value (email) are pre-set for you as shown in the image above. Complete this filter as follows:

  1. Enter an appropriate name in the textbox labelled Filter name:. COVID is the name in this case.
  2. Check that the property is From.
  3. Verify the test is set to is.
  4. Confirm that the value for the email address is from the correct sender.
  5. Select the “+” to the right of the From rule to create a new filter rule.
  6. In the new rule, change the default property entry From to Subject using the pulldown menu.
  7. Set the test to contains.
  8. Enter the value text to be matched in the Email “Subject” line. In this case COVID.

Since we left the Match all of the following item checked, each message will be from the address chosen AND will have the text COVID in the email subject line.

Now use the action rule to choose the destination for the messages under the Perform these actions: section at the bottom:

  1. Select Move Messages to from the left menu.
  2. Select Choose Folder… and select Local Folders > COVID in Scotland. (This destination was created before this example was started. There was no magic here.)
  3. Select OK.

OK will cause the Message Filters menu to appear, again, verifying that the new filter has been created.

The Message Filters menu

All the message filters you create will appear in the Message Filters menu. Recall that the Message Filters is available in the menu bar at Tools > Message Filters.

Once you have created filters there are several options to manage them. To change a filter, select it and click on the Edit button. This takes you back to the Filter Rules menu for that filter. As mentioned earlier, you can change the order in which the filters are applied using the Move buttons. Disable a filter by clicking on the check mark in the Enabled column.

The Run Now button will execute the selected filter immediately. You may also run your filter from the menu bar using Tools > Run Filters on Folder or Tools > Run Filters on Message.

Next step

This article hasn’t covered every feature available for message filtering but hopefully it provides enough information for you to get started. Places for further investigation are the “property”, “test”, and “actions” in the Filter menu as well as the settings there for when your filter is to be run, Archiving, After Sending, and Periodically.

References

Mozilla: Organize Your Messages by Using Filters

MozillaZine: Message Filters

Convert your filesystem to Btrfs

Friday 22nd of January 2021 08:00:00 AM
Introduction

The purpose of this article is to give you an overview of why, and how, to migrate your current partitions to a Btrfs filesystem. If you’re curious about doing this yourself, follow along for a step-by-step walkthrough of how it is accomplished.

Starting with Fedora 33, the default filesystem is now Btrfs for new installations. I’m pretty sure that most users have heard about its advantages by now: copy-on-write, built-in checksums, flexible compression options, easy snapshotting and rollback methods. It’s really a modern filesystem that brings new features to desktop storage.

Updating to Fedora 33, I wanted to take advantage of Btrfs, but personally didn’t want to reinstall the whole system for ‘just a filesystem change’. I found [there was] little guidance on how exactly to do it, so I decided to share my detailed experience here.

Watch out!

Doing this, you are playing with fire. Hopefully you are not surprised to read the following:

During editing partitions and converting file systems, you can have your data corrupted and/or lost. You can end up with an unbootable system and might be facing data recovery. You can inadvertently delete your partitions or otherwise harm your system.

These conversion procedures are meant to be safe even for production systems – but only if you plan ahead, have backups for critical data and rollback plans. As a sudoer, you can do anything without limits, without any of the usual safety guards protecting you.

The safe way: reinstalling Fedora

Reinstalling your operating system is the ‘official’ way of converting to Btrfs, recommended for most users. Therefore, choose this option if you are unsure about anything in this guide. The steps are roughly the following:

  1. Backup your home folder and any data that might be used in your system like /etc. [Editors note: VM’s too]
  2. Save your list of installed packages to a file.
  3. Reinstall Fedora by removing your current partitions and choosing the new default partitioning scheme with Btrfs.
  4. Restore the contents of your home folder and reinstall the packages using the package list.

For detailed steps and commands, see this comment by a community user at ask.fedoraproject.org. If you do this properly, you’ll end up with a system that is functioning in the same way as before, with minimal risk of losing any data.
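
As a hedged sketch of step 2 above (the exact command is a matter of taste; rpm is available on every Fedora system), you could save a sorted list of installed package names like this:

$ rpm -qa --qf '%{NAME}\n' | sort > ~/pkglist.txt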

Pros and cons of conversion

Let’s clarify this real quick: what kind of advantages and disadvantages does this kind of filesystem conversion have?

The good
  • Of course, no reinstallation is needed! Every file on your system will remain the exact same as before.
  • It’s technically possible to do it in-place i.e. without a backup.
  • You’ll surely learn a lot about btrfs!
  • It’s a rather quick procedure if everything goes according to plan.
The bad
  • You have to know your way around the terminal and shell commands.
  • You can lose data, see above.
  • If anything goes wrong, you are on your own to fix it.
The ugly
  • You’ll need about 20% of free disk space for a successful conversion. But for the complete backup & reinstall scenario, you might need even more.
  • You can customize everything about your partitions during the process, but you can also do that from Anaconda if you choose to reinstall.
What about LVM?

LVM layouts have been the default during the last few Fedora installations. If you have an LVM partition layout with multiple partitions e.g. / and /home, you would somehow have to merge them in order to enjoy all the benefits of Btrfs.

If you choose so, you can individually convert partitions to Btrfs while keeping the volume group. Nevertheless, one of the advantages of migrating to Btrfs is to get rid of the limits imposed by the LVM partition layout. You can also use the send-receive functionality offered by btrfs to merge the partitions after the conversion.

See also on Fedora Magazine: Reclaim hard-drive space with LVM, Recover your files from Btrfs snapshots and Choose between Btrfs and LVM-ext4.

Getting acquainted with Btrfs

It’s advisable to read at least the following to have a basic understanding about what Btrfs is about. If you are unsure, just choose the safe way of reinstalling Fedora.

Must reads

Useful resources

Conversion steps

Create a live image

Since you can’t convert mounted filesystems, we’ll be working from a Fedora live image. Install Fedora Media Writer and ‘burn’ Fedora 33 to your favorite USB stick.

Free up disk space

btrfs-convert will recreate filesystem metadata in your partition’s free disk space, while keeping all existing ext4 data at its current location.

Unfortunately, the amount of free space required cannot be known ahead – the conversion will just fail (and do no harm) if you don’t have enough. Here are some useful ideas for freeing up space:

  • Use baobab to identify large files & folders to remove. Don’t manually delete files outside of your home folder if possible.
  • Clean up old system journals: journalctl --vacuum-size=100M
  • If you are using Docker, carefully use tools like docker volume prune, docker image prune -a
  • Clean up unused virtual machine images inside e.g. GNOME Boxes
  • Clean up unused packages and flatpaks: dnf autoremove, flatpak remove --unused
  • Clean up package caches: pkcon refresh force -c -1, dnf clean all
  • If you’re confident enough to, you can cautiously clean up the ~/.cache folder.
Convert to Btrfs

Save all your valuable data to a backup, make sure your system is fully updated, then reboot into the live image. Run gnome-disks to find out your device handle e.g. /dev/sda1 (it can look different if you are using LVM). Check the filesystem and do the conversion: [Editors note: The following commands are run as root, use caution!]

$ sudo su -
# fsck.ext4 -fyv /dev/sdXX
# man btrfs-convert (read it!)
# btrfs-convert /dev/sdXX

This can take anywhere from 10 minutes to even hours, depending on the partition size and whether you have a rotational or solid-state hard drive. If you see errors, you’ll likely need more free space. As a last resort, you could try btrfs-convert -n.

How to roll back?

If the conversion fails for some reason, your partition will remain ext4 or whatever it was before. If you wish to roll back after a successful conversion, it’s as simple as

# btrfs-convert -r /dev/sdXX

Warning! You will permanently lose your ability to roll back if you do any of these: defragmentation, balancing or deleting the ext2_saved subvolume.

Due to the copy-on-write nature of Btrfs, you can otherwise safely copy, move and even delete files, create subvolumes, because ext2_saved keeps referencing to the old data.

Mount & check

Now the partition is supposed to have a Btrfs file system. Mount it and look around your files… and subvolumes!

# mount /dev/sdXX /mnt
# man btrfs-subvolume (read it!)
# btrfs subvolume list / (-t for a table view)

Because you have already read the relevant manual page, you should now know that it’s safe to create subvolume snapshots, and that you have an ext2_saved subvolume as a handy backup of your previous data.

It’s time to read the Btrfs sysadmin guide, so that you won’t confuse subvolumes with regular folders.

Create subvolumes

We would like to achieve a ‘flat’ subvolume layout, which is the same as what Anaconda creates by default:

toplevel (volume root directory, not to be mounted by default)
+-- root (subvolume root directory, to be mounted at /)
+-- home (subvolume root directory, to be mounted at /home)

You can skip this step, or decide to aim for a different layout. The advantage of this particular structure is that you can easily create snapshots of /home, and have different compression or mount options for each subvolume.

# cd /mnt
# btrfs subvolume snapshot ./ ./root2
# btrfs subvolume create home2
# cp -a home/* home2/

Here, we have created two subvolumes. root2 is a full snapshot of the partition, while home2 starts as an empty subvolume and we copy the contents inside. (This cp command doesn’t duplicate data so it is going to be fast.)

  • In /mnt (the top-level subvolume) delete everything except root2, home2, and ext2_saved.
  • Rename root2 and home2 subvolumes to root and home.
  • Inside root subvolume, empty out the home folder, so that we can mount the home subvolume there later.

It’s simple if you get everything right!
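
A minimal sketch of those three steps, assuming you are still in /mnt and used the subvolume names from above (the placeholder is yours to fill in, so review every command before running it):

# cd /mnt
# ls                              # check the contents before deleting anything
# rm -rf <old top-level folders>  # keep root2, home2 and ext2_saved!
# mv root2 root
# mv home2 home
# rm -rf root/home/*              # empty the old home folder inside the root subvolume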

Modify fstab

In order to mount the new volume after a reboot, fstab has to be modified, by replacing the old ext4 mount lines with new ones.

You can use the command blkid to learn your partition’s UUID.

UUID=xx /     btrfs subvol=root 0 0
UUID=xx /home btrfs subvol=home 0 0

(Note that the two UUIDs are the same if they are referring to the same partition.)

These are the defaults for new Fedora 33 installations. In fstab you can also choose to customize compression and add options like noatime.
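
For example, a hedged sketch of a customized /home line (zstd level 1 is the compression level Fedora uses by default; the UUID is a placeholder):

UUID=xx /home btrfs subvol=home,compress=zstd:1,noatime 0 0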

See the relevant wiki page about compression and man 5 btrfs for all relevant options.

Chroot into your system

If you’ve ever done system recovery, I’m pretty sure you know these commands. Here, we get a shell prompt that is essentially inside your system, with network access.

First, we have to remount the root subvolume to /mnt, then mount the /boot and /boot/efi partitions (these can be different depending on your filesystem layout):

# umount /mnt # mount -o subvol=root /dev/sdXX /mnt # mount /dev/sdXX /mnt/boot # mount /dev/sdXX /mnt/boot/efi

Then we can move on to mounting system devices:

# mount -t proc /proc /mnt/proc
# mount --rbind /dev /mnt/dev
# mount --make-rslave /mnt/dev
# mount --rbind /sys /mnt/sys
# mount --make-rslave /mnt/sys
# cp /mnt/etc/resolv.conf /mnt/etc/resolv.conf.chroot
# cp -L /etc/resolv.conf /mnt/etc
# chroot /mnt /bin/bash
$ ping www.fedoraproject.org

Reinstall GRUB & kernel

The easiest way – now that we have network access – is to reinstall GRUB and the kernel because it does all configuration necessary. So, inside the chroot:

# mount /boot/efi
# dnf reinstall grub2-efi shim
# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
# dnf reinstall kernel-core

...or just regenerate the initramfs:

# dracut --kver $(uname -r) --force

This applies if you have a UEFI system. Check the docs below if you have a BIOS system. Before rebooting, let’s check if everything went well:

# cat /boot/grub2/grubenv
# cat /boot/efi/EFI/fedora/grub.cfg
# lsinitrd /boot/initramfs-$(uname -r).img | grep btrfs

You should have proper partition UUIDs or references in grubenv and grub.cfg (grubenv may not have been updated, edit it if needed) and see insmod btrfs in grub.cfg and btrfs module in your initramfs image.

See also: Reinstalling GRUB 2 and Verifying the Initial RAM Disk Image in the Fedora System Administration Guide.

Reboot

Now your system should boot properly. If not, don’t panic, go back to the live image and fix the issue. In the worst case, you can just reinstall Fedora from right there.

After first boot

Check that everything is fine with your new Btrfs system. If you are happy, you’ll need to reclaim the space used by the old ext4 snapshot, then defragment and balance the subvolumes. The latter two might take some time and are quite resource intensive.

You have to mount the top level subvolume for this:

# mount /dev/sdXX -o subvol=/ /mnt/someFolder
# btrfs subvolume delete /mnt/someFolder/ext2_saved

Then, run these commands when the machine has some idle time:

# btrfs filesystem defrag -v -r -f /
# btrfs filesystem defrag -v -r -f /home
# btrfs balance start -m /

Finally, there’s a “no copy-on-write” attribute that is automatically set for virtual machine image folders for new installations. Set it if you are using VMs:

# chattr +C /var/lib/libvirt/images
$ chattr +C ~/.local/share/gnome-boxes/images

This attribute only takes effect for new files in these folders. Duplicate the images and delete the originals. You can confirm the result with lsattr.
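
For example, to list the attributes on the libvirt image folder:

# lsattr /var/lib/libvirt/images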

Wrapping up

I really hope that you have found this guide useful, and that it helped you make a careful and educated decision about whether or not to convert to Btrfs on your system. I wish you a successful conversion process!

Feel free to share your experience here in the comments, or if you run into deeper issues, on ask.fedoraproject.org.

Deploy your own Matrix server on Fedora CoreOS

Monday 18th of January 2021 08:00:00 AM

Today it is very common for open source projects to distribute their software via container images. But how can these containers be run securely in production? This article explains how to deploy a Matrix server on Fedora CoreOS.

What is Matrix?

Matrix provides an open source, federated and optionally end-to-end encrypted communication platform.

From matrix.org:

Matrix is an open source project that publishes the Matrix open standard for secure, decentralised, real-time communication, and its Apache licensed reference implementations.

Matrix also includes bridges to other popular platforms such as Slack, IRC, XMPP and Gitter. Some open source communities are replacing IRC with Matrix or adding Matrix as a new communication channel (see for example Mozilla, KDE, and Fedora).

Matrix is a federated communication platform. If you host your own server, you can join conversations hosted on other Matrix instances. This makes it great for self hosting.

What is Fedora CoreOS?

From the Fedora CoreOS docs:

Fedora CoreOS is an automatically updating, minimal, monolithic, container-focused operating system, designed for clusters but also operable standalone, optimized for Kubernetes but also great without it.

With Fedora CoreOS (FCOS), you get all the benefits of Fedora (podman, cgroups v2, SELinux) packaged in a minimal automatically updating system thanks to rpm-ostree.

To get more familiar with Fedora CoreOS basics, check out this getting started article:

Getting started with Fedora CoreOS

Creating the Fedora CoreOS configuration

Running a Matrix service requires the following software, each of which is covered below:

  • the Synapse Matrix homeserver
  • a PostgreSQL database
  • the nginx web server, acting as an HTTPS reverse proxy
  • certbot, to obtain and renew Let’s Encrypt certificates
  • the Element web client

This guide will demonstrate how to run all of the above software in containers on the same FCOS server. All the services will be configured to run in containers under the Podman container engine.

Assembling the FCCT configuration

Configuring and provisioning these containers on the host requires an Ignition file. FCCT generates the ignition file using a YAML configuration file as input. On Fedora Linux you can install FCCT using dnf:

$ sudo dnf install fcct
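
As a quick sanity check outside of the provided Makefile (flag names can differ between fcct releases, so treat this as a sketch), fcct reads the YAML on standard input and writes the Ignition JSON on standard output:

$ fcct --pretty --strict < config.yaml > config.ign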

A GitHub repository is available for the reader; it contains all the configuration needed and a basic template system to simplify personalization of the configuration. These template values use the %%VARIABLE%% format and each variable is defined in a file named secrets.

User and ssh access

The first configuration step is to define an SSH key for the default user core.

variant: fcos
version: 1.3.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - %%SSH_PUBKEY%%

Cgroups v2

Fedora CoreOS comes with cgroups version 1 by default, but it can be configured to use cgroups v2. Using the latest version of cgroups allows for better control of the host resources among other new features.

Switching to cgroups v2 is done via a systemd service that modifies the kernel arguments and reboots the host on first boot.

systemd:
  units:
    - name: cgroups-v2-karg.service
      enabled: true
      contents: |
        [Unit]
        Description=Switch To cgroups v2
        After=systemd-machine-id-commit.service
        ConditionKernelCommandLine=systemd.unified_cgroup_hierarchy
        ConditionPathExists=!/var/lib/cgroups-v2-karg.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy
        ExecStart=/bin/touch /var/lib/cgroups-v2-karg.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target

Podman pod

Podman supports the creation of pods. Pods are quite handy when you need to group containers together within the same network namespace. Containers within a pod can communicate with each other using the localhost address.

Create and configure a pod to run the different services needed by Matrix:

- name: podmanpod.service
  enabled: true
  contents: |
    [Unit]
    Description=Creates a podman pod to run the matrix services.
    After=cgroups-v2-karg.service network-online.target
    Wants=cgroups-v2-karg.service network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=sh -c 'podman pod exists matrix || podman pod create -n matrix -p 80:80 -p 443:443 -p 8448:8448'

    [Install]
    WantedBy=multi-user.target

Another advantage of using a pod is that we can control which ports are exposed to the host from the pod as a whole. Here we expose the ports 80, 443 (HTTP and HTTPS) and 8448 (Matrix federation) to the host to make these services available outside of the pod.

A web server with Let’s Encrypt support

The Matrix protocol is HTTP based. Clients connect to their homeserver via HTTPS. Federation between Matrix homeservers is also done over HTTPS. For this setup, you will need three domains. Using distinct domains helps to isolate each service and protect against cross-site scripting (XSS) vulnerabilities.

  • example.tld: The base domain for your homeserver. This will be part of the user Matrix IDs (for example: @username:example.tld).
  • matrix.example.tld: The sub-domain for your Synapse Matrix server.
  • chat.example.tld: The sub-domain for your Element web client.

To simplify the configuration, you only need to set your own domain once in the secrets file.

You will need to ask Let’s Encrypt for certificates on first boot for each of the three domains. Make sure that the domains are configured beforehand to resolve to the IP address that will be assigned to your server. If you do not know what IP address will be assigned to your server in advance, you might want to use another ACME challenge method to get Let’s Encrypt certificates (see DNS Plugins).

- name: certbot-firstboot.service
  enabled: true
  contents: |
    [Unit]
    Description=Run certbot to get certificates
    ConditionPathExists=!/var/srv/matrix/letsencrypt-certs/archive
    After=podmanpod.service network-online.target nginx-http.service
    Wants=network-online.target
    Requires=podmanpod.service nginx-http.service

    [Service]
    Type=oneshot
    ExecStart=/bin/podman run \
        --name=certbot \
        --pod=matrix \
        --rm \
        --cap-drop all \
        --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
        --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
        docker.io/certbot/certbot:latest \
        --agree-tos --webroot certonly

    [Install]
    WantedBy=multi-user.target

Once the certificates are available, you can start the final instance of nginx. Nginx will act as an HTTPS reverse proxy for your services.

- name: nginx.service
  enabled: true
  contents: |
    [Unit]
    Description=Run the nginx server
    After=podmanpod.service network-online.target certbot-firstboot.service
    Wants=network-online.target
    Requires=podmanpod.service certbot-firstboot.service

    [Service]
    ExecStartPre=/bin/podman pull docker.io/nginx:stable
    ExecStart=/bin/podman run \
        --name=nginx \
        --pull=always \
        --pod=matrix \
        --rm \
        --volume /var/srv/matrix/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,z \
        --volume /var/srv/matrix/nginx/dhparam:/etc/nginx/dhparam:ro,z \
        --volume /var/srv/matrix/letsencrypt-webroot:/var/www:ro,z \
        --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:ro,z \
        --volume /var/srv/matrix/well-known:/var/well-known:ro,z \
        docker.io/nginx:stable
    ExecStop=/bin/podman rm --force --ignore nginx

    [Install]
    WantedBy=multi-user.target

Because Let’s Encrypt certificates have a short lifetime, they must be renewed frequently. Set up a system timer to automate their renewal:

- name: certbot.timer
  enabled: true
  contents: |
    [Unit]
    Description=Weekly check for certificates renewal

    [Timer]
    OnCalendar=Sun *-*-* 02:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

- name: certbot.service
  enabled: false
  contents: |
    [Unit]
    Description=Let's Encrypt certificate renewal
    ConditionPathExists=/var/srv/matrix/letsencrypt-certs/archive
    After=podmanpod.service network-online.target
    Wants=network-online.target
    Requires=podmanpod.service

    [Service]
    Type=oneshot
    ExecStart=/bin/podman run \
        --name=certbot \
        --pod=matrix \
        --rm \
        --cap-drop all \
        --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
        --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
        docker.io/certbot/certbot:latest \
        renew
    ExecStartPost=/bin/systemctl restart --no-block nginx.service

Synapse and database

Finally, configure the Synapse server and PostgreSQL database.

The Synapse server requires a configuration and secrets keys to be generated. Follow the GitHub repository’s README file section to generate those.

Once these steps are completed, add a systemd service using podman to run Synapse as a container:

- name: synapse.service
  enabled: true
  contents: |
    [Unit]
    Description=Run the synapse service.
    After=podmanpod.service network-online.target
    Wants=network-online.target
    Requires=podmanpod.service

    [Service]
    ExecStart=/bin/podman run \
        --name=synapse \
        --pull=always \
        --read-only \
        --pod=matrix \
        --rm \
        -v /var/srv/matrix/synapse:/data:z \
        docker.io/matrixdotorg/synapse:v1.24.0
    ExecStop=/bin/podman rm --force --ignore synapse

    [Install]
    WantedBy=multi-user.target

Setting up the PostgreSQL database is very similar. You will also need to provide a POSTGRES_PASSWORD in the repository’s secrets file and declare a systemd service (check here for all the details).

Setting the Fedora CoreOS host

FCCT provides a storage directive which is useful for creating directories and adding files on the Fedora CoreOS host.

The following configuration makes sure that the configuration needed by each service is present under /var/srv/matrix. Each service has a dedicated directory. For example /var/srv/matrix/nginx and /var/srv/matrix/synapse. These directories are mounted by podman as volumes when the containers are started.

storage:
  directories:
    - path: /var/srv/matrix
      mode: 0700
    - path: /var/srv/matrix/synapse/media_store
      mode: 0777
    - path: /var/srv/matrix/postgres
    - path: /var/srv/matrix/letsencrypt-webroot
  trees:
    - local: synapse
      path: /var/srv/matrix/synapse
    - local: nginx
      path: /var/srv/matrix/nginx
    - local: nginx-http
      path: /var/srv/matrix/nginx-http
    - local: letsencrypt-certs
      path: /var/srv/matrix/letsencrypt-certs
    - local: well-known
      path: /var/srv/matrix/well-known
    - local: element-web
      path: /var/srv/matrix/element-web
  files:
    - path: /etc/postgresql_synapse
      contents:
        local: postgresql_synapse
      mode: 0700

Auto-updates

You are now ready to set up the most powerful part of Fedora CoreOS ‒ auto-updates. On Fedora CoreOS, the system is automatically updated and restarted approximately once every two weeks for each new Fedora CoreOS release. On startup, all containers will be updated to the latest version (because the pull=always option is set). The containers are stateless and volume mounts are used for any data that needs to be persistent across reboots.

The PostgreSQL container is an exception. It can not be fully updated automatically because it requires manual intervention for major releases. However, it will still be updated with new patch releases to fix security issues and bugs as long as the specified version is supported (approximately five years). Be aware that Synapse might start requiring a newer version before support ends. Consequently, you should plan a manual update approximately once per year for new PostgreSQL releases. The steps to update PostgreSQL are documented in this project’s README.

To maximise availability and avoid service interruptions in the middle of the day, set an update strategy in Zincati’s configuration to only allow reboots for updates during certain periods of time. For example, one might want to restrict reboots to week days between 2 AM and 4 AM UTC. Make sure to pick the correct time for your timezone. Fedora CoreOS uses the UTC timezone by default. Here is an example configuration snippet that you could append to your config.yaml:

[updates] strategy = "periodic" [[updates.periodic.window]] days = [ "Mon", "Tue", "Wed", "Thu", "Fri" ] start_time = "02:00" length_minutes = 120 Creating your own Matrix server by using the git repository

Some sections were lightly edited to make this article easier to read, but you can find the full, unedited configuration in this GitHub repository. To host your own server from this configuration, fill in the secrets values and generate the full configuration with fcct via the provided Makefile:

$ cp secrets.example secrets
$ ${EDITOR} secrets # Fill in values not marked as generated by synapse

Next, generate the Synapse secrets and include them in the secrets file. Finally, you can build the final configuration with make:

$ make # This will generate the config.ign file

You are now ready to deploy your Matrix homeserver on Fedora CoreOS. Follow the instructions for your platform of choice from the documentation to proceed.

Registering new users

What’s a service without users? Open registration was disabled by default to avoid issues. You can re-enable open registration in the Synapse configuration if you are up for it. Alternatively, even with open registration disabled, it is possible to add new users to your server via the command line:

$ sudo podman run --rm --tty --interactive \
      --pod=matrix \
      -v /var/srv/matrix/synapse:/data:z,ro \
      --entrypoint register_new_matrix_user \
      docker.io/matrixdotorg/synapse:latest \
      -c /data/homeserver.yaml http://127.0.0.1:8008

Conclusion

You are now ready to join the Matrix federated universe! Enjoy your quickly deployed and automatically updating Matrix server! Remember that auto-updates are made as safe as possible: if anything breaks, you can either roll back the system to the previous version or use the previous container image to work around any bugs while they are being fixed. Being able to quickly set up a system that will be kept updated and secure is the main advantage of the Fedora CoreOS model.

To go further, take a look at this other article that is taking advantage of Terraform to generate the configuration and directly deploy Fedora CoreOS on your platform of choice.

Deploy Fedora CoreOS servers with Terraform

Discover Fedora Kinoite: a Silverblue variant with the KDE Plasma desktop

Thursday 14th of January 2021 08:00:00 AM

Fedora Kinoite is an immutable desktop operating system featuring the KDE Plasma desktop. In short, Fedora Kinoite is like Fedora Silverblue but with KDE instead of GNOME. It is an emerging variant of Fedora, based on the same technologies as Fedora Silverblue (rpm-ostree, Flatpak, podman) and created exclusively from official RPM packages from Fedora.

Fedora Kinoite is like Silverblue, but what is Silverblue?

From the Fedora Silverblue documentation:

Fedora Silverblue is an immutable desktop operating system. It aims to be extremely stable and reliable. It also aims to be an excellent platform for developers and for those using container-focused workflows.

For more details about what makes Fedora Silverblue interesting, read the “What is Fedora Silverblue?” article previously published on Fedora Magazine. Everything in that article also applies to Fedora Kinoite.

Fedora Kinoite status

Kinoite is not yet an official emerging edition of Fedora. But this is in progress and currently planned for Fedora 35. However, it is already usable! Join us if you want to help.

Kinoite is made from the same packages that are available in the Fedora repositories and that go into the classic Fedora KDE Plasma Spin, so it is as functional and stable as classic Fedora. I’m also “eating my own dog food” as it is the only OS installed on the laptop that I am currently using to write this article.

However, be aware that Kinoite does not currently have graphical support for updates. You will need to be comfortable with using the command line to manage updates, install Flatpaks, or overlay packages with rpm-ostree.
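
For example, overlaying a package from the Fedora repositories (the package name below is only an illustration) is a single rpm-ostree command, followed by a reboot so the new deployment takes effect:

$ sudo rpm-ostree install some-package
$ sudo systemctl reboot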

How to try Fedora Kinoite

As Kinoite is not yet an official Fedora edition, there is not a dedicated installer for it for now. To get started, install Silverblue and switch to Kinoite with the following commands:

# Add the temporary unofficial Kinoite remote
$ curl -O https://tim.siosm.fr/downloads/siosm_gpg.pub
$ sudo ostree remote add kinoite https://siosm.fr/kinoite/ --gpg-import siosm_gpg.pub

# Optional, only if you want to keep Silverblue available
$ sudo ostree admin pin 0

# Switch to Kinoite
$ sudo rpm-ostree rebase kinoite:fedora/33/x86_64/kinoite

# Reboot
$ sudo systemctl reboot

How to keep it up-to-date

Kinoite does not yet have graphical support for updates in Discover. Work is in progress to teach Discover how to manage an rpm-ostree system. Flatpak management mostly works with Discover but the command line is still the best option for now.

To update the system, use:

$ rpm-ostree update

To update Flatpaks, use:

$ flatpak update

Status of KDE Apps Flatpaks

Just like Fedora Silverblue, Fedora Kinoite focuses on applications delivered as Flatpaks. Some non KDE applications are available in the Fedora Flatpak registry, but until this selection is expanded with KDE ones, your best bet is to look for them in Flathub (see all KDE Apps on Flathub). Be aware that applications on Flathub may include non-free or proprietary software. The KDE SIG is working on packaging KDE Apps as Fedora provided Flatpaks but this is not ready yet.

Submitting bug reports

Report issues in the Fedora KDE SIG issue tracker or in the discussion thread at discussion.fedoraproject.org.

Other desktop variants

Although this project started with KDE, I have also already created variants for XFCE, Mate, Deepin, Pantheon, and LXQt. They are currently available from the same remote as Kinoite. Note that they will remain unofficial until someone steps up to maintain them officially in Fedora.

I have also created an additional smaller Base variant without any desktop environment. This allows you to overlay the lightweight window manager of your choice (i3, Sway, etc.). The same caveats as the ones for other desktop environments apply (currently unofficial and will need a maintainer).

Java development on Fedora Linux

Friday 8th of January 2021 08:00:00 AM

Java is a lot. Aside from being an island of Indonesia, it is a large software development ecosystem. Java was released in January 1996. It is approaching its 25th birthday and it’s still a popular platform for enterprise and casual software development. Many things, from banking to Minecraft, are powered by Java development.

This article will guide you through all the individual components that make Java and how they interact. This article will also cover how Java is integrated in Fedora Linux and how you can manage different versions. Finally, a small demonstration using the game Shattered Pixel Dungeon is provided.

A birds-eye perspective of Java

The following subsections present a quick recap of a few important parts of the Java ecosystem.

The Java language

Java is a strongly typed, object oriented programming language. Its principal designer is James Gosling, who worked at Sun, and Java was officially announced in 1995. Java’s design is strongly inspired by C and C++, but with a more streamlined syntax. Pointers are not present and parameters are passed by value. Integers and floats no longer have signed and unsigned variants, and more complex objects like Strings are part of the base definition.

But that was 1995, and the language has seen its ups and downs in development. Between 2006 and 2014, no major releases were made, which led to stagnation and opened up the market to competition. There are now multiple competing Java-esque languages like Scala, Clojure and Kotlin. A large part of ‘Java’ programming nowadays uses one of these alternative language specifications, which focus on functional programming or cross-compilation.

// Java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}

// Scala
object Hello {
    def main(args: Array[String]) = {
        println("Hello, world!")
    }
}

;; Clojure
(defn -main [& args]
  (println "Hello, world!"))

// Kotlin
fun main(args: Array<String>) {
    println("Hello, world!")
}

The choice is now yours. You can choose to use a modern version or you can opt for one of the alternative languages if they suit your style or business better.

The Java platform

Java isn’t just a language. It is also a virtual machine to run the language. It’s a C/C++ based application that takes the code, and executes it on the actual hardware. Aside from that, the platform is also a set of standard libraries which are included with the Java Virtual Machine (JVM) and which are written in the same language. These libraries contain logic for things like collections and linked lists, date-times, and security.
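
As a tiny illustration (assuming the Java snippet from earlier is saved as Hello.java), compiling to bytecode and running it on the JVM looks like this:

$ javac Hello.java   # produces Hello.class, the JVM bytecode
$ java Hello         # the JVM loads and executes the bytecode
Hello, world!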

And the ecosystem doesn’t stop there. There are also software repositories like Maven and Clojars which contain a considerable amount of usable third-party libraries. There are also special libraries aimed at certain languages, providing extra benefits when used together. Additionally, tools like Apache Maven, Sbt and Gradle allow you to compile, bundle and distribute the application you write. What is important is that this platform works with other languages. You can write your code in Scala and have it run side-by-side with Java code on the same platform.

Last but not least, there is a special link between the Java platform and the Android world. You can compile Java and Kotlin for the Android platform to get additional libraries and tools to work with.

License history

Since 2006, the Java platform is licensed under the GPL 2.0 with a classpath-exception. This means that everybody can build their own Java platform; tools and libraries included. This makes the ecosystem very competitive. There are many competing tools for building, distribution, and development.

Sun ‒ the original maintainer of Java ‒ was bought by Oracle in 2009. In 2017, Oracle changed the license terms of the Java package. This prompted multiple reputable software suppliers to create their own Java packaging chain. Red Hat, IBM, Amazon and SAP now have their own Java packages. They use the OpenJDK trademark to distinguish their offering from Oracle’s version.

It deserves special mention that the Java platform package provided by Oracle is not FLOSS. There are strict license restrictions to Oracle’s Java-trademarked platform. For the remainder of this article, Java refers to the FLOSS edition ‒ OpenJDK.

Finally, the classpath-exception deserves special mention. While the license is GPL 2.0, the classpath-exception allows you to write proprietary software using Java as long as you don’t change the platform itself. This puts the license somewhere in between the GPL 2.0 and the LGPL and it makes Java very suitable for enterprises and commercial activities.

Praxis

If all of that seems quite a lot to take in, don’t panic. It’s 25 years of software history and there is a lot of competition. The following subsections demonstrate using Java on Fedora Linux.

Running Java locally

The default Fedora Workstation 33 installation includes OpenJDK 11. The open source code of the platform is bundled for Fedora Workstation by the Fedora Project’s package maintainers. To see for yourself, you can run the following:

$ java -version

Multiple versions of OpenJDK are available in Fedora Linux’s default repositories. They can be installed concurrently. Use the alternatives command to select which installed version of OpenJDK should be used by default.

$ dnf search openjdk
$ alternatives --config java

Also, if you have Podman installed, you can find most OpenJDK options by searching for them.

$ podman search openjdk
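
For instance (the image and tag are only an example; pick whatever the search returns), you can check a containerized JDK without installing anything on the host:

$ podman run --rm docker.io/library/openjdk:11 java -version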

There are many options to run Java, both natively and in containers. Many other Linux distributions also come with OpenJDK out of the box; Pkgs.org has a comprehensive list. If you want to try one of those distributions, GNOME Boxes or Virt Manager will be your friend.

To get involved with the Fedora community directly, see their project Wiki.

Alternative configurations

If the Java version you want is not available in the repositories, use SDKMAN to install Java in your home directory. It also allows you to switch between multiple installed versions and it comes with popular CLI tools like Ant, Maven, Gradle and Sbt.

Last but not least, some vendors provide direct downloads for Java. Special mention goes to AdoptOpenJDK which is a collaborative effort among several major vendors to provide simple FLOSS packages and binaries.

Graphical tools

Several integrated development environments (IDEs) are available for Java. Some of the more popular IDEs include:

  • Eclipse: This is free software published and maintained by the Eclipse Foundation. Install it directly from the Fedora Project’s repositories or from Flathub.
  • NetBeans: This is free software published and maintained by the Apache foundation. Install it from their site or from Flathub.
  • IntelliJ IDEA: This is proprietary software but it comes with a gratis community version. It is published by Jet Brains. Install it from their site or from Flathub.

The above tools are themselves written in OpenJDK. They are examples of dogfooding.

Demonstration

The following demonstration uses Shattered Pixel Dungeon ‒ a Java-based rogue-like which is available on Android, Flathub and others.

First, set up a development environment:

$ curl -s "https://get.sdkman.io" | bash $ source "$HOME/.sdkman/bin/sdkman-init.sh" $ sdk install gradle

Next, close your terminal window and open a new terminal window. Then run the following commands in the new window:

$ git clone https://github.com/00-Evan/shattered-pixel-dungeon.git
$ cd shattered-pixel-dungeon
$ gradle desktop:debug

Now, import the project in Eclipse. If Eclipse is not already installed, run the following command to install it:

$ sudo dnf install eclipse-jdt

Use Import Projects from File System to add the code of Shattered Pixel Dungeon.

As you can see in the imported resources on the top left, not only do you have the code of the project to look at, but you also have the OpenJDK available with all its resources and libraries.

If this motivates you further, I would like to point you towards the official documentation from Shattered Pixel Dungeon. The Shattered Pixel Dungeon build system relies on Gradle which is an optional extra that you will have to configure manually in Eclipse. If you want to make an Android build, you will have to use Android Studio. Android Studio is a gratis, Google-branded version of IntelliJ IDEA.

Summary

Developing with OpenJDK on Fedora Linux is a breeze. Fedora Linux provides some of the most powerful development tools available. Use Podman or Virt-Manager to easily and securely host server applications. OpenJDK provides a FLOSS means of creating applications that puts you in control of all the application’s components.

Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Network address translation part 1 – packet tracing

Monday 4th of January 2021 08:00:00 AM

The first post in a series about network address translation (NAT). Part 1 shows how to use the iptables/nftables packet tracing feature to find the source of NAT related connectivity problems.

Introduction

Network address translation is one way to expose containers or virtual machines to the wider internet. Incoming connection requests have their destination address rewritten to a different one. Packets are then routed to a container or virtual machine instead. The same technique can be used for load-balancing where incoming connections get distributed among a pool of machines.

Connection requests fail when network address translation is not working as expected. The wrong service is exposed, connections end up in the wrong container, requests time out, and so on. One way to debug such problems is to check that the incoming request matches the expected or configured translation.

Connection tracking

NAT involves more than just changing the ip addresses or port numbers. For instance, when mapping address X to Y, there is no need to add a rule to do the reverse translation. A netfilter system called “conntrack” recognizes packets that are replies to an existing connection. Each connection has its own NAT state attached to it. Reverse translation is done automatically.

Ruleset evaluation tracing

The nftables utility (and, to a lesser extent, iptables) allows for examining how a packet is evaluated and which rules in the ruleset it matched. To use this special feature, “trace rules” are inserted at a suitable location. These rules select the packet(s) that should be traced. Let’s assume that a host coming from IP address C is trying to reach the service on address S and port P. We want to know which NAT transformation is picked up, which rules get checked and whether the packet gets dropped somewhere.

Because we are dealing with incoming connections, add a rule to the prerouting hook point. Prerouting means that the kernel has not yet made a decision on where the packet will be sent to. A change to the destination address often results in the packet being forwarded rather than being handled by the host itself.

Initial setup

# nft 'add table inet trace_debug'
# nft 'add chain inet trace_debug trace_pre { type filter hook prerouting priority -200000; }'
# nft "insert rule inet trace_debug trace_pre ip saddr $C ip daddr $S tcp dport $P tcp flags syn limit rate 1/second meta nftrace set 1"

The first rule adds a new table. This allows easier removal of the trace and debug rules later. A single “nft delete table inet trace_debug” will be enough to undo all rules and chains added to the temporary table during debugging.

The second rule creates a base hook before routing decisions have been made (prerouting) and with a negative priority value to make sure it will be evaluated before connection tracking and the NAT rules.

The only important part, however, is the last fragment of the third rule: “meta nftrace set 1”. This enables tracing events for all packets that match the rule. Be as specific as possible to get a good signal-to-noise ratio. Consider adding a rate limit to keep the number of trace events at a manageable level. A limit of one packet per second or per minute is a good choice. The provided example traces all syn and syn/ack packets coming from host $C and going to destination port $P on the destination host $S. The limit clause prevents event flooding. In most cases a trace of a single packet is enough.

The procedure is similar for iptables users. An equivalent trace rule looks like this:

# iptables -t raw -I PREROUTING -s $C -d $S -p tcp --tcp-flags SYN SYN --dport $P -m limit --limit 1/s -j TRACE

Obtaining trace events

Users of the native nft tool can just run the nft trace mode:

# nft monitor trace

This prints out the received packet and all rules that match the packet (use CTRL-C to stop it):

trace id f0f627 ip raw prerouting  packet: iif "veth0" ether saddr ..

We will examine this in more detail in the next section. If you use iptables, first check the installed version via the “iptables --version” command. Example:

# iptables --version
iptables v1.8.5 (legacy)

(legacy) means that trace events are logged to the kernel ring buffer. You will need to check dmesg or journalctl. The debug output lacks some information but is conceptually similar to the one provided by the new tools. You will need to check the rule line numbers that are logged and correlate those to the active iptables ruleset yourself. If the output shows (nf_tables), you can use the xtables-monitor tool:

# xtables-monitor --trace

If the command only shows the version, you will also need to look at dmesg/journalctl instead. xtables-monitor uses the same kernel interface as the nft monitor trace tool. Their only difference is that it will print events in iptables syntax and that, if you use a mix of both iptables-nft and nft, it will be unable to print rules that use maps/sets and other nftables-only features.

Example

Let’s assume you’d like to debug a non-working port forward to a virtual machine or container. The command “ssh -p 1222 10.1.2.3” should provide remote access to a container running on the machine with that address, but the connection attempt times out.

You have access to the host running the container image. Log in and add a trace rule. See the earlier example on how to add a temporary debug table. The trace rule looks like this:

nft "insert rule inet trace_debug trace_pre ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1"

After the rule has been added, start nft in trace mode: nft monitor trace, then retry the failed ssh command. This will generate a lot of output if the ruleset is large. Do not worry about the large example output below – the next section will do a line-by-line walkthrough.

trace id 9c01f8 inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.2.1.2 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
trace id 9c01f8 inet trace_debug trace_pre verdict continue
trace id 9c01f8 inet trace_debug trace_pre policy accept
trace id 9c01f8 inet nat prerouting packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp  tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

Line-by-line trace walkthrough

The first line generated is the packet id that triggered the subsequent trace output. Even though this is in the same grammar as the nft rule syntax, it contains the header fields of the packet that was just received. You will find the name of the receiving network interface (here named “enp0”), the source and destination mac addresses of the packet, the source ip address (which can be important – maybe the reporter is connecting from a wrong/unexpected host) and the tcp source and destination ports. You will also see a “trace id” at the very beginning. This identification tells which incoming packet matched a rule. The second line contains the first rule matched by the packet:

trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.2.1.2 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)

This is the just-added trace rule. The first rule is always the one that activates packet tracing. If there were other rules before this one, we would not see them. If there is no trace output at all, the trace rule itself is never reached or does not match. The next two lines tell that there are no further rules and that the “trace_pre” hook allows the packet to continue (verdict accept).

The next matching rule is

trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)

This rule sets up a mapping to a different address and port. Provided 192.168.70.10 really is the address of the desired VM, there is no problem so far. If it’s not the correct VM address, the address was either mistyped or the wrong NAT rule was matched.

IP forwarding

Next we can see that the IP routing engine told the IP stack that the packet needs to be forwarded to another host:

trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200

This is another dump of the packet that was received, but there are a couple of interesting changes. There is now an output interface set. This did not exist previously because the previous rules are located before the routing decision (the prerouting hook). The id is the same as before, so this is still the same packet, but the address and port has already been altered. In case there are rules that match “tcp dport 1222” they will have no effect anymore on this packet.

If the line contains no output interface (oif), the routing decision steered the packet to the local host. Route debugging is a different topic and not covered here.

trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

This tells that the packet matched a rule that jumps to a chain named “allowed_dnats”. The next line shows the source of the connection failure:

trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)

The rule unconditionally drops the packet, so no further log output for the packet exists. The next output line is the result of a different packet:

trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

The trace id is different, but the packet has the same content. This is a retransmit attempt: the first packet was dropped, so TCP retries. You can ignore the remaining output; it does not contain new information. Time to inspect that chain.

Ruleset investigation

The previous section found that the packet is dropped in a chain named “allowed_dnats” in the inet filter table. Time to look at it:

# nft list chain inet filter allowed_dnats
table inet filter {
  chain allowed_dnats {
    meta nfproto ipv4 ip daddr . tcp dport @allow_in accept
    drop
  }
}

The rule that accepts packets in the @allow_in set did not show up in the trace log. Double-check that the address is in the @allow_in set by listing the element:

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
Error: Could not process rule: No such file or directory

As expected, the address-service pair is not in the set. We add it now.

# nft "add element inet filter allow_in { 192.168.70.10 . 22 }"

Run the query command again; it will return the newly added element.

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }" table inet filter { set allow_in { type ipv4_addr . inet_service elements = { 192.168.70.10 . 22 } } }

The ssh command should now work and the trace output reflects the change:

trace id 497abf58 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

trace id 497abf58 inet filter allowed_dnats rule meta nfproto ipv4 ip daddr . tcp dport @allow_in accept (verdict accept)

trace id 497abf58 ip postrouting packet: iif "enp0" oif "veth21" ether ..

trace id 497abf58 ip postrouting policy accept

This shows the packet passes the last hook in the forwarding path – postrouting.

If the connection is still not working, the problem is somewhere later in the packet pipeline and outside of the nftables ruleset.
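Once debugging is finished, you may want to remove the temporary trace rules again. Assuming the trace rule lives in a dedicated table named trace_debug that was created only for this exercise, deleting that table removes everything it contains:

# nft delete table inet trace_debug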

Summary

This article gave an introduction to checking for packet drops and other sources of connectivity problems with the nftables trace mechanism. A later post in the series shows how to inspect the connection tracking subsystem and the NAT information that may be attached to tracked flows.

A 2020 love letter to the Fedora community

Thursday 31st of December 2020 08:00:00 AM

[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]

When I wrote about COVID-19 and the Fedora community all the way back on March 16, it was very unclear how 2020 was going to turn out. We didn’t take our Flock to Fedora in-person conference off the table for another month. Back then, I naively hoped that this would be a short event and that life would return to normal soon. But of course, things got worse, and we had to reimagine Flock as a virtual event on short notice. We weren’t even sure if we’d be able to make our regular Fedora Linux releases on schedule.

Even without the pandemic, 2020 was already destined to be an interesting year. Because Red Hat moved the datacenter where most of Fedora’s servers live, our infrastructure team had to move our servers across the continent. Fedora 33 had the largest planned change set of any Fedora Linux release—and not small things either. We changed the default filesystem for desktop variants to BTRFS and promoted Fedora IoT to an Edition. We also began Fedora ELN—a new process which does a nightly build of Fedora’s development branch in the same configuration Red Hat would use to compose Red Hat Enterprise Linux. And Fedora’s popularity keeps growing, which means more users to support and more new community members to onboard. It’s great to be successful, but we also need to keep up with ourselves!

So, it was already busy. And then the pandemic came along. In many ways, we’re fortunate: we’re already a global community used to distributed work, and we already use chat-based meetings and video calls to collaborate. But it made the datacenter move more difficult. The closure of Red Hat offices meant that some of the QA hardware was inaccessible. We couldn’t gather together in person like we’re used to doing. And of course, we all worried about the safety of our friends and family. Isolation and disruption just plain make everything harder.

I’m always proud of the Fedora community, but this year, even more so. In a time of great stress and uncertainty, we came together and did our best work. Flock to Fedora became Nest With Fedora. Thanks to the heroic effort of Marie Nordin and many others, it was a resounding success. We had way more attendees than we’ve ever had at an in-person Flock, which made our community more accessible to contributors who can’t always join us. And we followed up with our first-ever virtual release party and an online Fedora Women’s Day, both also resounding successes.

And then, we shipped both Fedora 32 and Fedora 33 on time, extending our streak to six releases—three straight years of hitting our targets.

The work we all did has not gone unnoticed. You already know that Lenovo is shipping Fedora Workstation on select laptop models. I’m happy to share that two of the top Linux podcasts have recognized our work—particularly Fedora 33—in their year-end awards. LINUX Unplugged listeners voted Fedora Linux their favorite Linux desktop distribution. Three out of the four Destination Linux hosts chose Fedora as the best distro of the year, specifically citing the exciting work we’ve done on Fedora 33 and the strength of our community. In addition, OMG! Ubuntu! included Fedora 33 in its “5 best Linux distribution releases of 2020” and TechRepublic called Fedora 33 “absolutely fantastic”.

Like everyone, I’m looking ahead to 2021. The next few months are still going to be hard, but the amazing work on mRNA and other new vaccine technology means we have clear reasons to be optimistic. Through this trying year, the Fedora community is stronger than ever, and we have some great things to carry forward into better times: a Nest-like virtual event to complement Flock, online release parties, our weekly Fedora Social Hour, and of course the CPE team’s great trivia events.

In 2021, we’ll keep doing the great work to push the state of the art forward. We’ll be bold in bringing new features into Fedora Linux. We’ll try new things even when we’re worried that they might not work, and we’ll learn from failures and try again. And we’ll keep working to make our community and our platform inclusive, welcoming, and accessible to all.

To everyone who has contributed to Fedora in any way, thank you. Packagers, blog writers, doc writers, testers, designers, artists, developers, meeting chairs, sysadmins, Ask Fedora answerers, D&I team, and more—you kicked ass this year and it shows. Stay safe and healthy, and we’ll meet again in person soon. Oh, one more thing! Join us for a Fedora Social Hour New Year’s Eve Special. We’ll meet at 23:30 UTC today in Hopin (the platform we used for Nest and other events). Hope to see you there!

Choose between Btrfs and LVM-ext4

Wednesday 30th of December 2020 08:00:00 AM

Fedora 33 introduced a new default filesystem in desktop variants, Btrfs. After years of Fedora using ext4 on top of Logical Volume Manager (LVM) volumes, this is a big shift. Changing the default file system requires compelling reasons. While Btrfs is an exciting next-generation file system, ext4 on LVM is well established and stable. This guide aims to explore the high-level features of each and make it easier to choose between Btrfs and LVM-ext4.

In summary

The simplest advice is to stick with the defaults. A fresh Fedora 33 install defaults to Btrfs and upgrading a previous Fedora release continues to use whatever was initially installed, typically LVM-ext4. For an existing Fedora user, the cleanest way to get Btrfs is with a fresh install. However, a fresh install is much more disruptive than a simple upgrade. Unless there is a specific need, this disruption could be unnecessary. The Fedora development team carefully considered both defaults, so be confident with either choice.

What about all the other file systems?

There are a large number of file systems for Linux systems. The number explodes after adding in combinations of volume managers, encryption methods, and storage mechanisms. So why focus on Btrfs and LVM-ext4? For the Fedora audience, these two setups are likely to be the most common. Ext4 on top of LVM became the default disk layout in Fedora 11, and ext3 on top of LVM came before that.

Now that Btrfs is the default for Fedora 33, the vast majority of existing users will be looking at whether they should stay where they are or make the jump forward. Faced with a fresh Fedora 33 install, experienced Linux users may wonder whether to use this new file system or fall back to what they are familiar with. So out of the wide field of possible storage options, many Fedora users will wonder how to choose between Btrfs and LVM-ext4.

Commonalities

Despite core differences between the two setups, Btrfs and LVM-ext4 actually have a lot in common. Both are mature and well-tested storage technologies. LVM has been in continuous use since the early days of Fedora Core and ext4 became the default in 2009 with Fedora 11. Btrfs merged into the mainline Linux kernel in 2009 and Facebook uses it widely. SUSE Linux Enterprise 12 made it the default in 2014. So there is plenty of production run time there as well.

Both systems do a great job preventing file system corruption due to unexpected power outages, even though the way they accomplish it is different. Supported configurations include single drive setups as well as spanning multiple devices, and both are capable of creating nearly instant snapshots. A variety of tools exist to help manage either system, both with the command line and graphical interfaces. Either solution works equally well on home desktops and on high-end servers.

Advantages of LVM-ext4

Structure of ext4 on LVM

The ext4 file system focuses on high performance and scalability, without a lot of extra frills. It is effective at preventing fragmentation over extended periods of time and provides nice tools for when it does happen. Ext4 is rock solid because it is built on the previous ext3 file system, bringing with it all the years of in-system testing and bug fixes.

Most of the advanced capabilities in the LVM-ext4 setup come from LVM itself. LVM sits “below” the file system, which means it supports any file system. Logical volumes (LV) are generic block devices so virtual machines can use them directly. This flexibility allows each logical volume to use the right file system, with the right options, for a variety of situations. This layered approach also honors the Unix philosophy of small tools working together.
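As an illustration of that flexibility, here is a minimal sketch of carving a dedicated logical volume out of an existing volume group to hand to a virtual machine as a raw disk; the volume group name vgdata and the size are assumptions:

sudo lvcreate --size 20G --name vm-disk vgdata
# the VM can then use /dev/vgdata/vm-disk directly as a block device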

The volume group (VG) abstraction from the hardware allows LVM to create flexible logical volumes. Each LV pulls from the same storage pool but has its own configuration. Resizing volumes is a lot easier than resizing physical partitions, as there is no limitation of ordered placement of the data. LVM physical volumes (PV) can be any number of partitions and can even move between devices while the system is running.

LVM supports read-only and read-write snapshots, which make it easy to create consistent backups from active systems. Each snapshot has a defined size, and a change to the source or snapshot volume uses space from there. Alternatively, logical volumes can also be part of a thinly provisioned pool. This allows snapshots to automatically use data from a pool instead of consuming fixed-size chunks defined at volume creation.
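For example, a classic fixed-size snapshot might be created like this; this is a minimal sketch that assumes a volume group named fedora_fedora with a logical volume named root and a 1 GiB allocation for tracking changes:

sudo lvcreate --size 1G --snapshot --name root-snap fedora_fedora/root
# mount /dev/fedora_fedora/root-snap read-only to take a consistent backup,
# then remove it with: sudo lvremove fedora_fedora/root-snap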

Multiple devices with LVM

LVM really shines when there are multiple devices. It has native support for most RAID levels and each logical volume can have a different RAID level. LVM will automatically choose appropriate physical devices for the RAID configuration or the user can specify it directly. Basic RAID support includes data striping for performance (RAID0) and mirroring for redundancy (RAID1). Logical volumes can also use advanced setups like RAID5, RAID6, and RAID10. LVM RAID support is mature because under the hood LVM uses the same device-mapper (dm) and multiple-device (md) kernel support used by mdadm.

Logical volumes can also be cached volumes for systems with both fast and slow drives. A classic example is a combination of SSD and spinning-disk drives. Cached volumes use faster drives for more frequently accessed data (or as a write cache), and the slower drive for bulk data.

The large number of stable features in LVM and the reliable performance of ext4 are a testament to how long they have been in use. Of course, with more features comes complexity. It can be challenging to find the right options for the right feature when configuring LVM. For single drive desktop systems, features of LVM like RAID and cache volumes don’t apply. However, logical volumes are more flexible than physical partitions and snapshots are useful. For normal desktop use, the complexity of LVM can also be a barrier to recovering from issues a typical user might encounter.

Advantages of Btrfs

Btrfs Structure

Lessons learned from previous generations guided the features built into Btrfs. Unlike ext4, it can directly span multiple devices, so it brings along features typically found only in volume managers. It also has features that are unique in the Linux file system space (ZFS has a similar feature set, but don’t expect it in the Linux kernel).

Key Btrfs features

Perhaps the most important feature is the checksumming of all data. Checksumming, along with copy-on-write, provides the key method of ensuring file system integrity after unexpected power loss. More uniquely, checksumming can detect errors in the data itself. Silent data corruption, sometimes referred to as bitrot, is more common than most people realize. Without active validation, corruption can end up propagating to all available backups. This leaves the user with no valid copies. By transparently checksumming all data, Btrfs is able to immediately detect any such corruption. Enabling the right dup or raid option allows the file system to transparently fix the corruption as well.

Copy-on-write (COW) is also a fundamental feature of Btrfs, as it is critical in providing file system integrity and instant subvolume snapshots. Snapshots automatically share underlying data when created from common subvolumes. Additionally, after-the-fact deduplication uses the same technology to eliminate identical data blocks. Individual files can use COW features by calling cp with the reflink option. Reflink copies are especially useful for copying large files, such as virtual machine images, that tend to have mostly identical data over time.
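A quick, hedged illustration of a reflink copy (the file names are just examples): the copy completes almost instantly and shares all data blocks with the original until either file is modified.

cp --reflink=always vm-image.qcow2 vm-image-clone.qcow2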

Btrfs supports spanning multiple devices with no volume manager required. Multiple device support unlocks data mirroring for redundancy and striping for performance. There is also experimental support for more advanced RAID levels, such as RAID5 and RAID6. Unlike standard RAID setups, the Btrfs raid1 option actually allows an odd number of devices. For example, it can use 3 devices, even if they are different sizes.

All RAID and dup options are specified at the file system level. As a consequence, individual subvolumes cannot use different options. Note that using the RAID1 option with multiple devices means that all data in the volume is available even if one device fails and the checksum feature maintains the integrity of the data itself. That is beyond what current typical RAID setups can provide.

Additional features

Btrfs also enables quick and easy remote backups. Subvolume snapshots can be sent to a remote system for storage. By leveraging the inherent COW meta-data in the file system, these transfers are efficient by only sending incremental changes from previously sent snapshots. User applications such as snapper make it easy to manage these snapshots.
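As a rough sketch of that workflow (the subvolume path, snapshot name, and backup host are assumptions), a read-only snapshot is created and then streamed to another machine:

sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-2020-12-30
sudo btrfs send /home/.snapshots/home-2020-12-30 | ssh backup-host 'btrfs receive /backups'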

Additionally, a Btrfs volume can have transparent compression and chattr +c will mark individual files or directories for compression. Not only does compression reduce the space consumed by data, but it helps extend the life of SSDs by reducing the volume of write operations. Compression certainly introduces additional CPU overhead, but a lot of options are available to dial in the right trade-offs.
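For instance, compression can be enabled volume-wide with a mount option or per path with chattr; this is a minimal sketch, and the device, mount point, and zstd level are assumptions:

sudo mount -o compress=zstd:1 /dev/sdb1 /mnt/data
chattr +c ~/Documents    # newly written files under this directory are compressed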

The integration of file system and volume manager functions by Btrfs means that overall maintenance is simpler than LVM-ext4. Certainly this integration comes with less flexibility, but for most desktop, and even server, setups it is more than sufficient.

Btrfs on LVM

Btrfs can convert an ext3/ext4 file system in place. In-place conversion means there is no data to copy out and then back in. The data blocks themselves are not even modified. As a result, one option for existing LVM-ext4 systems is to leave LVM in place and simply convert ext4 over to Btrfs. While doable and supported, there are reasons why this isn't the best option.
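For reference, the conversion itself is handled by the btrfs-convert tool from the btrfs-progs package; a minimal sketch, assuming the logical volume /dev/fedora_fedora/home holds an unmounted ext4 filesystem:

sudo btrfs-convert /dev/fedora_fedora/home
# the original ext4 metadata is preserved in the ext2_saved subvolume until you either
# delete it or roll back with: sudo btrfs-convert --rollback /dev/fedora_fedora/home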

Some of the appeal of Btrfs is the easier management that comes with a file system integrated with a volume manager. By running on top of LVM, there is still another volume manager in play for any system maintenance. Also, LVM setups typically have multiple fixed-size logical volumes with independent file systems. While Btrfs supports multiple volumes in a given computer, many of the nice features expect a single volume with multiple subvolumes. The user is still stuck manually managing fixed-size LVM volumes if each one has an independent Btrfs volume. However, the ability to shrink mounted Btrfs filesystems does make working with fixed-size volumes less painful. With online shrink there is no need to boot a live image.

The physical locations of logical volumes must be carefully considered when using the multiple device support of Btrfs. To Btrfs, each LV is a separate physical device and if that is not actually the case, then certain data availability features might make the wrong decision. For example, using raid1 for data typically provides protection if a single drive fails. If the actual logical volumes are on the same physical device, then there is no redundancy.

If there is a strong need for some particular LVM feature, such as raw block devices or cached logical volumes, then running Btrfs on top of LVM makes sense. In this configuration, Btrfs still provides most of its advantages such as checksumming and easy sending of incremental snapshots. While LVM has some operational overhead when used, it is no more so with Btrfs than with any other file system.

Wrap up

When trying to choose between Btrfs and LVM-ext4 there is no single right answer. Each user has unique requirements, and the same user may have different systems with different needs. Take a look at the feature set of each configuration, and decide if there is something compelling about one over the other. If not, there is nothing wrong with sticking with the defaults. There are excellent reasons to choose either setup.

Contribute at the Fedora Test Week for Kernel 5.10

Monday 28th of December 2020 08:00:00 AM

The kernel team is working on final integration for kernel 5.10. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, January 04, 2021 through Monday, January 11, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document which outlines all of the steps.

Happy testing, and we hope to see you on test day.

Deploy Fedora CoreOS servers with Terraform

Wednesday 23rd of December 2020 08:00:00 AM

Fedora CoreOS is a lightweight, secure operating system optimized for running containerized workloads. A YAML document is all you need to describe the workload you’d like to run on a Fedora CoreOS server.

This is wonderful for a single server, but how would you describe a fleet of cooperating Fedora CoreOS servers? For example, what if you wanted a set of servers running load balancers, others running a database cluster and others running a web application? How can you get them all configured and provisioned? How can you configure them to communicate with each other? This article looks at how Terraform solves this problem.

Getting started

Before you start, decide whether you need to review the basics of Fedora CoreOS. Check out this previous article on the Fedora Magazine:

Getting started with Fedora CoreOS

Terraform is an open source tool for defining and provisioning infrastructure. Terraform defines infrastructure as code in files. It provisions infrastructure by calculating the difference between the desired state in code and observed state and applying changes to remove the difference.

HashiCorp, the company that created and maintains Terraform, offers an RPM repository to install Terraform.

sudo dnf config-manager --add-repo \
    https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf install terraform

To get yourself familiar with the tools, start with a simple example. You’re going to create a single Fedora CoreOS server in AWS. To follow along, you need to install awscli and have an AWS account. awscli can be installed from the Fedora repositories and configured using the aws configure command:

sudo dnf install -y awscli
aws configure

Please note, AWS is a paid service. If executed correctly, participants should expect less than $1 USD in charges, but mistakes may lead to unexpected charges.

Configuring Terraform

In a new directory, create a file named config.yaml. This file will hold the contents of your Fedora CoreOS configuration. The configuration simply adds an SSH key for the core user. Modify the authorized_ssh_key section to use your own.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Next, create a file main.tf to contain your Terraform specification. Take a look at the contents section by section. It begins with a block to specify the versions of your providers.

terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

Terraform uses providers to control infrastructure. Here it uses the AWS provider to provision EC2 servers, but it can provision any kind of AWS infrastructure. The ct provider from Poseidon Labs stands for config transpiler. This provider will transpile Fedora CoreOS configurations into Ignition configurations. As a result, you do not need to use fcct to transpile your configurations. Now that the provider versions are specified, configure the providers themselves.

provider "aws" { region = "us-west-2" } provider "ct" {}

The AWS region is set to us-west-2 and the ct provider requires no configuration. With the providers configured, you’re ready to define some infrastructure. Use a data source block to read the configuration.

data "ct_config" "config" { content = file("config.yaml") strict = true }

With this data block defined, you can now access the transpiled Ignition output as data.ct_config.config.rendered. To create an EC2 server, use a resource block, and pass the Ignition output as the user_data attribute.

resource "aws_instance" "server" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.config.rendered }

This configuration hard-codes the virtual machine image (AMI) to the latest stable image of Fedora CoreOS in the us-west-2 region at time of writing. If you would like to use a different region or stream, you can discover the correct AMI on the Fedora CoreOS downloads page.

Finally, you’d like to know the public IP address of the server once it’s created. Use an output block to define the outputs to be displayed once Terraform completes its provisioning.

output "instance_ip_addr" { value = aws_instance.server.public_ip }

Alright! You’re ready to create some infrastructure. To deploy the server simply run:

terraform init   # Installs the provider dependencies
terraform apply  # Displays the proposed changes and applies them

Once completed, Terraform prints the public IP address of the server, and you can SSH to the server by running ssh core@{public ip here}. Congratulations — you’ve provisioned your first Fedora CoreOS server using Terraform!

Updates and immutability

At this point you can modify the configuration in config.yaml however you like. To deploy your change simply run terraform apply again. Notice that each time you change the configuration, when you run terraform apply it destroys the server and creates a new one. This aligns well with the Fedora CoreOS philosophy: Configuration can only happen once. Want to change that configuration? Create a new server. This can feel pretty alien if you’re accustomed to provisioning your servers once and continuously re-configuring them with tools like Ansible, Puppet or Chef.

The benefit of always creating new servers is that it is significantly easier to test that newly provisioned servers will act as expected. It can be much more difficult to account for all of the possible ways in which updating a system in place may break. Tooling that adheres to this philosophy typically falls under the heading of Immutable Infrastructure. This approach to infrastructure has some of the same benefits seen in functional programming techniques, namely that mutable state is often a source of error.

Using variables

You can use Terraform input variables to parameterize your infrastructure. In the previous example, you might like to parameterize the AWS region or instance type. This would let you deploy several instances of the same configuration with differing parameters. What if you want to parameterize the Fedora CoreOS configuration? Do so using the templatefile function.

As an example, try parameterizing the username of your user. To do this, add a username variable to the main.tf file:

variable "username" { type = string description = "Fedora CoreOS user" default = "core" }

Next, modify the config.yaml file to turn it into a template. When rendered, the ${username} will be replaced.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: ${username}
      ssh_authorized_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Finally, modify the data block to render the template using the templatefile function.

data "ct_config" "config" { content = templatefile( "config.yaml", { username = var.username } ) strict = true }

To deploy with username set to jane, run terraform apply -var="username=jane". To verify, try to SSH into the server with ssh jane@{public ip address}.

Leveraging the dependency graph

Passing variables from Terraform into Fedora CoreOS configuration is quite useful. But you can go one step further and pass infrastructure data into the server configuration. This is where Terraform and Fedora CoreOS start to really shine.

Terraform creates a dependency graph to model the state of infrastructure and to plan updates. If the output of one resource (e.g., the public IP address of a server) is passed as the input of another resource (e.g., the destination in a firewall rule), Terraform understands that changes in the former require recreating or modifying the latter. If you pass infrastructure data into a Fedora CoreOS configuration, it will participate in the dependency graph. Updates to the inputs will trigger creation of a new server with the new configuration.

Consider a system of one load balancer and three web servers as an example.

The goal is to configure the load balancer with the IP address of each web server so that it can forward traffic to them.

Web server configuration

First, create a file web.yaml and add a simple Nginx configuration with a templated message.

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Nginx Web Server
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill nginx
        ExecStartPre=-/bin/podman rm nginx
        ExecStartPre=/bin/podman pull nginx
        ExecStart=/bin/podman run --name nginx -p 80:80 -v /etc/nginx/index.html:/usr/share/nginx/html/index.html:z nginx
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/nginx
  files:
    - path: /etc/nginx/index.html
      mode: 0444
      contents:
        inline: |
          <html>
            <h1>Hello from Server ${count}</h1>
          </html>

In main.tf, you can create three web servers using this template with the following blocks:

data "ct_config" "web" { count = 3 content = templatefile( "web.yaml", { count = count.index } ) strict = true } resource "aws_instance" "web" { count = 3 ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.web[count.index].rendered }

Notice the use of count = 3 and the count.index variable. You can use count to make many copies of a resource. Here, it creates three configurations and three web servers. The count.index variable is used to pass the first configuration to the first web server and so on.

Load balancer configuration

The load balancer will be a basic HAProxy load balancer that forwards to each server. Place the configuration in a file named lb.yaml:

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: haproxy.service
      enabled: true
      contents: |
        [Unit]
        Description=Haproxy Load Balancer
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill haproxy
        ExecStartPre=-/bin/podman rm haproxy
        ExecStartPre=/bin/podman pull haproxy
        ExecStart=/bin/podman run --name haproxy -p 80:8080 -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/haproxy
  files:
    - path: /etc/haproxy/haproxy.cfg
      mode: 0444
      contents:
        inline: |
          global
            log stdout format raw local0
          defaults
            mode tcp
            log global
            option tcplog
          frontend http
            bind *:8080
            default_backend http
          backend http
            balance roundrobin
            %{ for name, addr in servers ~}
            server ${name} ${addr}:80 check
            %{ endfor ~}

The template expects a map with server names as keys and IP addresses as values. You can create that using the zipmap function. Use the ID of the web servers as keys and the public IP addresses as values.

data "ct_config" "lb" { content = templatefile( "lb.yaml", { servers = zipmap( aws_instance.web.*.id, aws_instance.web.*.public_ip ) } ) strict = true } resource "aws_instance" "lb" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro user_data = data.ct_config.lb.rendered }

Finally, add an output block to display the IP address of the load balancer.

output "load_balancer_ip" { value = aws_instance.lb.public_ip }

All right! Run terraform apply and the IP address of the load balancer displays on completion. You should be able to make requests to the load balancer and get responses from each web server.

$ export LB={{load balancer IP here}}
$ curl $LB
<html>
  <h1>Hello from Server 0</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 1</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 2</h1>
</html>

Now you can modify the configuration of the web servers or load balancer. Any changes can be realized by running terraform apply once again. Note in particular that any change to the web server IP addresses will cause Terraform to recreate the load balancer (changing the count from 3 to 4 is a simple test). Hopefully this emphasizes that the load balancer configuration is indeed a part of the Terraform dependency graph.

Clean up

You can destroy all the infrastructure using the terraform destroy command. Simply navigate to the folder where you created main.tf and run terraform destroy.

Where next?

Code for this tutorial can be found at this GitHub repository. Feel free to play with the examples and contribute more if you find something you’d love to share with the world. To learn more about all the amazing things Fedora CoreOS can do, dive into the docs or come chat with the community. To learn more about Terraform, you can rummage through the docs, check out #terraform on freenode, or contribute on GitHub.

4 cool new projects to try in COPR from December 2020

Monday 21st of December 2020 08:00:00 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the COPR User Documentation for how to get started.

Blanket

Blanket is an application for playing background sounds, which may potentially improve your focus and increase your productivity. Alternatively, it may help you relax and fall asleep in a noisy environment. No matter what time it is or where you are, Blanket allows you to wake up while birds are chirping, work surrounded by friendly coffee shop chatter or distant city traffic, and then sleep like a log next to a fireplace while it is raining outside. Other popular choices for background sounds such as pink and white noise are also available.

Installation instructions

The repo currently provides Blanket for Fedora 32 and 33. To install it, use these commands:

sudo dnf copr enable tuxino/blanket
sudo dnf install blanket

k9s

k9s is a command-line tool for managing Kubernetes clusters. It allows you to list and interact with running pods, read their logs, dig through used resources, and overall make the Kubernetes life easier. With its extensibility through plugins and customizable UI, k9s is welcoming to power-users.

For many more preview screenshots, please see the project page.

Installation instructions

The repo currently provides k9s for Fedora 32, 33, and Fedora Rawhide as well as EPEL 7 and 8, CentOS Stream, and others. To install it, use these commands:

sudo dnf copr enable luminoso/k9s
sudo dnf install k9s

rhbzquery

rhbzquery is a simple tool for querying the Fedora Bugzilla instance. It provides an interface for specifying the search query but it doesn’t list results in the command-line. Instead, rhbzquery generates a Bugzilla URL and opens it in a web browser.

Installation instructions

The repo currently provides rhbzquery for Fedora 32, 33, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable petersen/rhbzquery
sudo dnf install rhbzquery

gping

gping is a more visually intriguing alternative to the standard ping command, as it shows results in a graph. It is also possible to ping multiple hosts at the same time to easily compare their response times.

Installation instructions

The repo currently provides gping for Fedora 32, 33, and Fedora Rawhide as well as for EPEL 7 and 8. To install it, use these commands:

sudo dnf copr enable atim/gping
sudo dnf install gping

Using pods with Podman on Fedora

Friday 18th of December 2020 08:00:00 AM

This article shows the reader how easy it is to get started using pods with Podman on Fedora. But what is Podman? Well, we will start by saying that Podman is a container engine developed by Red Hat, and yes, if you thought about Docker when reading container engine, you are on the right track.

A whole new revolution of containerization started with Docker, and Kubernetes added the concept of pods in the area of container orchestration when dealing with containers that share some common resources. But hold on! Do you really think it is worth sticking with Docker alone by assuming it’s the only effective way of containerization? Podman can also manage pods on Fedora as well as the containers used in those pods.

Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.

From the official Podman documentation at http://docs.podman.io/en/latest/

Why should you switch to Podman?

Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Podman directly interacts with an image registry, containers and image storage.

To install podman, run this command using sudo:

sudo dnf -y install podman

Creating a pod

To start using a pod, we first need to create it. The basic command structure is:

$ podman pod create
d2a5d381247c8677bb8b0907261c119c8644e3fb06235d0aafcb27ec32d89f48

The command above contains no arguments, so it will create a pod with a randomly generated name. You might, however, want to give your pod a relevant name. For that, you just need to modify the above command a bit.

$ podman pod create --name climoiselle
e65767428fa0be2a3275c59542f58f5b5a2b0ce929598e9d78128a8846c28493

The pod will be created, and Podman will report back the ID of the pod. In the example shown, the pod was given the name ‘climoiselle’. Viewing the newly created pod is easy using the command shown below:

$ podman pod list
POD ID        NAME              STATUS   CREATED         # OF CONTAINERS  INFRA ID
e65767428fa0  climoiselle       Created  19 seconds ago  1                d74fb8bf66e7
d2a5d381247c  blissful_dewdney  Created  32 minutes ago  1                3185af079c26

As you can see, there are two pods listed here: one named blissful_dewdney and the one created from the example, named climoiselle. No doubt you notice that both pods already include one container, even though we haven't deployed any containers to them yet.

What is that extra container inside the pod? This randomly generated container is an infra container. Every podman pod includes this infra container and in practice these containers do nothing but go to sleep. Their purpose is to hold the namespaces associated with the pod and to allow Podman to connect other containers to the pod. The other purpose of the infra container is to allow the pod to keep running when all associated containers have been stopped.

You can also view the individual containers within a pod with the command:

$ podman ps -a --pod
CONTAINER ID  IMAGE                 COMMAND  CREATED         STATUS   PORTS  NAMES               POD ID        PODNAME
d74fb8bf66e7  k8s.gcr.io/pause:3.2           38 seconds ago  Created         e65767428fa0-infra  e65767428fa0  climoiselle
3185af079c26  k8s.gcr.io/pause:3.2           32 minutes ago  Created         d2a5d381247c-infra  d2a5d381247c  blissful_dewdney

Add a container

The cool thing is, you can add more containers to your newly deployed pod. Always remember the name of your pod. It’s important as you’ll need that name in order to deploy the container in that pod. We’ll use the official Fedora image and deploy a container that uses it to run the bash shell.

$ podman run -it --rm --pod climoiselle fedora /bin/bash

When finished, type exit or hit Ctrl+D to leave the shell running in the container.

Everything in a single command

Podman is agile when it comes to deploying a container in a pod. You can create a pod and deploy a container to that pod with a single command. Let's say you want to deploy an NGINX container in a new pod named test_server, mapping external port 8080 to the container's internal port 80.

$ podman run -dt --pod new:test_server -p 8080:80 nginx
Trying to pull registry.fedoraproject.org/nginx...
  manifest unknown: manifest unknown
Trying to pull registry.access.redhat.com/nginx...
  unsupported: This repo requires terms acceptance and is only available on registry.redhat.io
Trying to pull registry.centos.org/nginx...
  manifest unknown: manifest unknown
Trying to pull docker.io/library/nginx...
Getting image source signatures
Copying blob e05167b6a99d done
Copying blob 2766c0bf2b07 done
Copying blob 70ac9d795e79 done
Copying blob 6ec7b7d162b2 done
Copying blob cb420a90068e done
Copying config ae2feff98a done
Writing manifest to image destination
Storing signatures
7cb4336ccc26835750f23b412bcb9270b6f5b0d1a4477abc45cdc12308bfe961

Let’s check all pods that have been created and the number of containers running in each of them …

$ podman pod list
POD ID        NAME              STATUS   CREATED         # OF CONTAINERS  INFRA ID
7495cc9c7d93  test_server       Running  2 minutes ago   2                6bd313bbfb0d
e65767428fa0  climoiselle       Created  11 minutes ago  1                d74fb8bf66e7
d2a5d381247c  blissful_dewdney  Created  43 minutes ago  1                3185af079c26

Do you want to know a detailed configuration of the pods which are running? Just type in the command shown below:

podman pod inspect [pod's name/id]

Make it stop!

To stop a pod, we need its name or ID, both of which are shown by the podman pod list command. Simply use the podman pod stop command and give it the name or ID of the pod.

$ podman pod stop test_server
7495cc9c7d93e0753b4473ad4f2478acfc70d5afd12db2f3e315773f2df30c3f
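Once a pod is stopped, you may also want to remove it entirely; a minimal sketch, reusing the pod name from the example above:

$ podman pod rm test_server    # add -f to force removal if containers are still present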

After following this short tutorial, you can see how quickly you can use pods with Podman on Fedora. It’s an easy and convenient way to use containers that share resources and interact together.

Further reading

Ben Cotton: How Do You Fedora?

Monday 14th of December 2020 08:00:00 AM

We recently interviewed Ben Cotton on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Ben Cotton?

If you follow the Fedora Community Blog, there’s a good chance you already know who Ben is.

Ben’s Linux journey started around late 2002. He was frustrated with issues using Windows XP and had started a new application administrator role at his university, where some services ran on FreeBSD. When Ben decided it made sense to get more practice with Unix-like operating systems, a friend introduced him to Red Hat Linux. He switched to Fedora full-time in 2006, after he landed a job as a Linux system administrator.

Since then, his career has included system administration, people management, support engineering, development, and marketing. Several years ago, he even earned a Master’s degree in IT Project Management. The variety of experience has helped Ben learn how to work with different groups of people. “A lot of what I’ve learned has come from making mistakes. When you mess up communication, you hopefully do a better job the next time.”

Besides tech, Ben also has a range of well-rounded interests. “I used to do a lot of short fiction writing, but these days I mostly write my opinions about whatever is on my mind.” As for favorite foods, he claims “All of it. Feed me.”

Additionally, Ben has taste that spans genres. His childhood hero was a character from the science fiction series “Star Trek: The Next Generation”. “As a young lad, I wanted very much to be Wesley Crusher.” His favorite movies are a parody film and a spy thriller: “‘Airplane!’ and ‘The Hunt for Red October’” respectively. 

When asked for the five greatest qualities he thinks someone can possess, Ben responded cleverly: “Kindness. Empathy. Curiosity. Resilience. Red hair.”

Ben wearing the official “#action bcotton” shirt

His Fedora Story

As a talented writer who described himself as “not much of a programmer”, he selected the Fedora Docs team in 2009 as an entry point into the community. What he found was that “the Friends foundation was evident.” At the time, he wasn’t familiar with tools such as Git, DocBook XML, or Publican (docs toolchain at the time). The community of experienced doc writers helped him get on his feet and freely gave their time. To this day, Ben considers many of them to be his friends and feels really lucky to work with them. Notably “jjmcd, stickster, sparks, and jsmith were a big part of the warm welcome I received.”

Today, as a senior program manager, he describes his job as “Chief Cat Herding Officer,” as it largely consists of seeing what different parts of the project are doing and making sure they’re all heading in the same general direction.

Despite having a huge responsibility, Ben also helps a lot in his free time with tasks outside of his job duties, like website work, CommBlog and Magazine editing, and packaging, none of which are his core job responsibilities. He tries to find ways to contribute that match his skills and interests. Building credibility, paying attention, developing relationships with other contributors, and showing folks that he’s able to help are much more important to him than what his “official” title is.

When thinking towards the future, Ben feels hopeful watching the Change proposals come in. “Sometimes they get rejected, but that’s to be expected when you’re trying to advance the state of the art. Fedora contributors are working hard to push the project forward.“

The Fedora Community 

As a longtime member of the community, Ben has various notions about the Fedora Project that have been developed over the years. For starters, he wants to make it easier to bring new contributors on board. He believes the Join SIG has “done tremendous work in this area”, but new contributors will keep the community vibrant. 

If Ben had to pick a best moment, he’d choose Flock 2018 in Dresden. “That was my first Fedora event and it was great to meet so many of the people who I’ve only known online for a decade.” 

As for bad moments, Ben hasn’t had many. Once he accidentally messed up a Bugzilla query, resulting in the closure of hundreds of bugs, and he has dealt with some frustrating mailing list threads, but he remains positive, affirming that “frustration is okay.”

To those interested in becoming involved in the Fedora Project, Ben says “Come join us!” There’s something to appeal to almost anyone. “Take the time to develop relationships with the people you meet as you join, because without the Friends foundation, the rest falls apart.”

Pet Peeves

One issue he finds challenging is a lack of documentation. “I’ve learned enough over the years that I can sort of bumble through making code changes to things, but a lot of times it’s not clear how the code ties together.” Ben sees how sparse or nonexistent documentation can be frustrating to newcomers who might not have the knowledge that is assumed.

Another concern Ben has is that the “interesting” parts of technology are changing. “Operating systems aren’t as important to end users as they used to be thanks to the rise of mobile computing and Software-as-a-Service. Will this cause our pool of potential new contributors to decrease?”

Likewise, Ben believes that it’s not always easy to get people to understand why they should care about open source software. “The reasons are often abstract and people don’t see that they’re directly impacted, especially when the alternatives provide more immediate convenience.”

What Hardware?

For work, Ben has a ThinkPad X1 Carbon running Fedora 33 KDE. His personal server/desktop is a machine he put together from parts that runs Fedora 33 KDE. He uses it as a file server, print server, Plex media server, and general-purpose desktop. If he has some spare time to get it started, Ben also has an extra laptop that he wants to start using to test Beta releases and “maybe end up running rawhide on it”.

What Software?

Ben has been a KDE user for a decade. A lot of his work is done in a web browser (Firefox for work stuff, Chrome for personal). He does most of his scripting in Python these days, with some inherited scripts in Perl.

Notable applications that Ben uses include:

  • Cherrytree for note-taking
  • Element for IRC
  • Inkscape and Kdenlive when he needs to edit videos.
  • Vim on the command line and Kate when he wants a GUI

Add storage to your Fedora system with LVM

Monday 7th of December 2020 08:00:00 AM

Sometimes there is a need to add another disk to your system. This is where Logical Volume Management (LVM) comes in handy. The cool thing about LVM is that it’s fairly flexible. There are several ways to add a disk. This article describes one way to do it.

Heads up!

This article does not cover the process of physically installing a new disk drive into your system. Consult your system and disk documentation on how to do that properly.

Important: Always make sure you have backups of important data. The steps described in this article will destroy data if it already exists on the new disk.

Good to know

This article doesn’t cover every LVM feature deeply; the focus is on adding a disk. But basically, LVM has volume groups, made up of one or more partitions and/or disks. You add the partitions or disks as physical volumes. A volume group can be broken down into many logical volumes. Logical volumes can be used as any other storage for filesystems, ramdisks, etc. More information can be found here.

Think of the physical volumes as forming a pool of storage (a volume group) from which you then carve out logical volumes for your system to use directly.

Preparation

Make sure you can see the disk you want to add. Use lsblk prior to adding the disk to see what storage is already available or in use.

$ lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
zram0                  251:0    0  989M  0 disk [SWAP]
vda                    252:0    0   20G  0 disk
├─vda1                 252:1    0    1G  0 part /boot
└─vda2                 252:2    0   19G  0 part
  └─fedora_fedora-root 253:0    0   19G  0 lvm  /

This article uses a virtual machine with virtual storage. Therefore the device names start with vda for the first disk, vdb for the second, and so on. The name of your device may be different. Many systems will see physical disks as sda for the first disk, sdb for the second, and so on.

Once the new disk has been connected and your system is back up and running, use lsblk again to see the new block device.

$ lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
zram0                  251:0    0  989M  0 disk [SWAP]
vda                    252:0    0   20G  0 disk
├─vda1                 252:1    0    1G  0 part /boot
└─vda2                 252:2    0   19G  0 part
  └─fedora_fedora-root 253:0    0   19G  0 lvm  /
vdb                    252:16   0   10G  0 disk

There is now a new device named vdb. The location for the device is /dev/vdb.

$ ls -l /dev/vdb
brw-rw----. 1 root disk 252, 16 Nov 24 12:56 /dev/vdb

We can see the disk, but we cannot use it with LVM yet. If you run blkid you should not see it listed. For this and following commands, you’ll need to ensure your system is configured so you can use sudo:

$ sudo blkid
/dev/vda1: UUID="4847cb4d-6666-47e3-9e3b-12d83b2d2448" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="830679b8-01"
/dev/vda2: UUID="k5eWpP-6MXw-foh5-Vbgg-JMZ1-VEf9-ARaGNd" TYPE="LVM2_member" PARTUUID="830679b8-02"
/dev/mapper/fedora_fedora-root: UUID="f8ab802f-8c5f-4766-af33-90e78573f3cc" BLOCK_SIZE="4096" TYPE="ext4"
/dev/zram0: UUID="fc6d7a48-2bd5-4066-9bcf-f062b61f6a60" TYPE="swap"

Add the disk to LVM

Initialize the disk using pvcreate. You need to pass the full path to the device. In this example it is /dev/vdb; on your system it may be /dev/sdb or another device name.

$ sudo pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.

You should see the disk has been initialized as an LVM2_member when you run blkid:

$ sudo blkid
/dev/vda1: UUID="4847cb4d-6666-47e3-9e3b-12d83b2d2448" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="830679b8-01"
/dev/vda2: UUID="k5eWpP-6MXw-foh5-Vbgg-JMZ1-VEf9-ARaGNd" TYPE="LVM2_member" PARTUUID="830679b8-02"
/dev/mapper/fedora_fedora-root: UUID="f8ab802f-8c5f-4766-af33-90e78573f3cc" BLOCK_SIZE="4096" TYPE="ext4"
/dev/zram0: UUID="fc6d7a48-2bd5-4066-9bcf-f062b61f6a60" TYPE="swap"
/dev/vdb: UUID="4uUUuI-lMQY-WyS5-lo0W-lqjW-Qvqw-RqeroE" TYPE="LVM2_member"

You can list all physical volumes currently available using pvs:

$ sudo pvs
  PV         VG            Fmt  Attr PSize   PFree
  /dev/vda2  fedora_fedora lvm2 a--  <19.00g      0
  /dev/vdb                 lvm2 ---   10.00g 10.00g

/dev/vdb is listed as a PV (physical volume), but it isn’t assigned to a VG (volume group) yet.

Add the physical volume to a volume group

You can find a list of available volume groups using vgs:

$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  fedora_fedora   1   1   0 wz--n- 19.00g    0

In this example, there is only one volume group available. Next, add the physical volume to fedora_fedora:

$ sudo vgextend fedora_fedora /dev/vdb
  Volume group "fedora_fedora" successfully extended

You should now see the physical volume is added to the volume group:

$ sudo pvs
  PV         VG            Fmt  Attr PSize   PFree
  /dev/vda2  fedora_fedora lvm2 a--  <19.00g       0
  /dev/vdb   fedora_fedora lvm2 a--  <10.00g <10.00g

Look at the volume groups:

$ sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  fedora_fedora   2   1   0 wz--n- 28.99g <10.00g

You can get a detailed list of the specific volume group and physical volumes as well:

$ sudo vgdisplay fedora_fedora
  --- Volume group ---
  VG Name               fedora_fedora
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               28.99 GiB
  PE Size               4.00 MiB
  Total PE              7422
  Alloc PE / Size       4863 / 19.00 GiB
  Free  PE / Size       2559 / 10.00 GiB
  VG UUID               C5dL2s-dirA-SQ15-TfQU-T3yt-l83E-oI6pkp

Look at the PV:

$ sudo pvdisplay /dev/vdb
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               fedora_fedora
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               2559
  Allocated PE          0
  PV UUID               4uUUuI-lMQY-WyS5-lo0W-lqjW-Qvqw-RqeroE

Now that we have added the disk, we can allocate space to logical volumes (LVs):

$ sudo lvs
  LV   VG            Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root fedora_fedora -wi-ao---- 19.00g

Look at the logical volumes. Here’s a detailed look at the root LV:

$ sudo lvdisplay fedora_fedora/root
  --- Logical volume ---
  LV Path                /dev/fedora_fedora/root
  LV Name                root
  VG Name                fedora_fedora
  LV UUID                yqc9cw-AvOw-G1Ni-bCT3-3HAa-qnw3-qUSHGM
  LV Write Access        read/write
  LV Creation host, time fedora, 2020-11-24 11:44:36 -0500
  LV Status              available
  LV Size                19.00 GiB
  Current LE             4863
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Look at the size of the root filesystem and compare it to the logical volume size.

$ df -h /
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root   19G  1.4G   17G   8% /

The logical volume and the filesystem both agree the size is 19G. Let’s add 5G to the root logical volume:

$ sudo lvresize -L +5G fedora_fedora/root
  Size of logical volume fedora_fedora/root changed from 19.00 GiB (4863 extents) to 24.00 GiB (6143 extents).
  Logical volume fedora_fedora/root successfully resized.

We now have 24G available to the logical volume. Look at the / filesystem.

$ df -h /
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root   19G  1.4G   17G   8% /

The filesystem still shows only 19G. This is because the logical volume is not the same thing as the filesystem. To use the new space added to the logical volume, resize the filesystem.

$ sudo resize2fs /dev/fedora_fedora/root
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/fedora_fedora/root is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 3
The filesystem on /dev/fedora_fedora/root is now 6290432 (4k) blocks long.

Look at the size of the filesystem.

$ df -h /
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root   24G  1.4G   21G   7% /

As you can see, the root file system (/) has taken all of the space available on the logical volume and no reboot was needed.
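As a side note, the resize can also be done in one step: lvresize accepts a --resizefs (-r) flag that grows the filesystem right after extending the logical volume. A minimal sketch, equivalent to the two separate commands above:

$ sudo lvresize --resizefs -L +5G fedora_fedora/root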

You have now initialized a disk as a physical volume, and extended the volume group with the new physical volume. After that you increased the size of the logical volume, and resized the filesystem to use the new space from the logical volume.

More in Tux Machines

Python: Security and NumPy 1.20 Release

  • Python Package Index nukes 3,653 malicious libraries uploaded soon after security shortcoming highlighted

    The Python Package Index, also known as PyPI, has removed 3,653 malicious packages uploaded days after a security weakness in the use of private and public registries was highlighted. Python developers use PyPI to add software libraries written by other developers in their own projects. Other programming languages implement similar package management systems, all of which demand some level of trust. Developers are often advised to review any code they import from an external library though that advice isn't always followed. Package management systems like npm, PyPI, and RubyGems have all had to remove subverted packages in recent years. Malware authors have found that if they can get their code included in popular libraries or applications, they get free distribution and trust they haven't earned. Last month, security researcher Alex Birsan demonstrated how easy it is to take advantage of these systems through a form of typosquatting that exploited the interplay between public and private package registries.

  • A pair of Python vulnerabilities [LWN.net]

    Two separate vulnerabilities led to the fast-tracked release of Python 3.9.2 and 3.8.8 on February 19, though source-only releases of 3.7.10 and 3.6.13 came a few days earlier. The vulnerabilities may be problematic for some Python users and workloads; one could potentially lead to remote code execution. The other is, arguably, not exactly a flaw in the Python standard library—it simply also follows an older standard—but it can lead to web cache poisoning attacks. [...] [Update: As pointed out in an email from Moritz Muehlenhoff, Python 2.7 actually is affected by this bug. He notes that python2 on Debian 10 ("Buster") is affected and has been updated. Also, Fedora has a fix in progress for its python2.7 package.]

  • NumPy 1.20 has been released

NumPy is a Python library that adds an array data type to the language, along with providing operators appropriate to working on arrays and matrices. By wrapping fast Fortran and C numerical routines, NumPy allows Python programmers to write performant code in what is normally a relatively slow language. NumPy 1.20.0 was announced on January 30, in what its developers describe as the largest release in the history of the project. That makes for a good opportunity to show a little bit about what NumPy is, how to use it, and to describe what's new in the release. [...] NumPy adds a new data type to Python: the multidimensional ndarray. This is a container, like a Python list, but with some crucial differences. A NumPy array is usually homogeneous; while the elements of a list can be of various types, an ndarray will, typically, only contain a single, simple type, such as integers, strings, or floats. However, these arrays can instead contain arbitrary Python objects (i.e. descendants of object). This means that the elements will, for simple data types, all occupy the same amount of space in memory. The elements of an ndarray are laid out contiguously in memory, whereas there is no such guarantee for a list. In this way, they are similar to Fortran arrays. These properties of NumPy arrays are essential for efficiency because the location of each element can be directly calculated. Beyond just adding efficient arrays, NumPy also overloads arithmetic operators to act element-wise on the arrays. This allows the Python programmer to express computations concisely, operating on arrays as units, in many cases avoiding the need to use loops. This does not turn Python into a full-blown array language such as APL, but adds to it a syntax similar to that incorporated into Fortran 90 for array operations.
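As a tiny illustration of the element-wise behavior described above (a minimal sketch, not taken from the release announcement):

import numpy as np

# A homogeneous ndarray: every element is a 64-bit float, stored contiguously.
prices = np.array([9.99, 14.50, 3.25])
quantities = np.array([2, 1, 4])

# Arithmetic operators act element-wise, with no explicit Python loop.
totals = prices * quantities
print(totals)        # [19.98 14.5  13.  ]
print(totals.sum())  # 47.48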

4 Best Free and Open Source Graphical MPD Clients

MPD is a powerful server-side application for playing music. In a home environment, you can connect an MPD server to a Hi-Fi system, and control the server using a notebook or smartphone. You can, of course, play audio files on remote clients. MPD can be started system-wide or on a per-user basis. MPD runs in the background playing music from its playlist. Client programs communicate with MPD to manipulate playback, the playlist, and the database. The client–server model provides advantages over all-inclusive music players. Clients can communicate with the server remotely over an intranet or over the Internet. The server can be a headless computer located anywhere on a network. There are graphical clients, console clients, and web-based clients. To provide an insight into the quality of software that is available, we have compiled a list of the 4 best graphical MPD clients. Hopefully, there will be something of interest here for anyone who wants to listen to their music collection via MPD. Here are our recommendations. They are all free and open source goodness.
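To make the client–server model concrete, here is a minimal sketch of a client talking to MPD directly over its text protocol; the host address is a made-up example, and 6600 is MPD's usual default port:

import socket

HOST, PORT = "192.168.1.50", 6600   # hypothetical MPD server on the local network

with socket.create_connection((HOST, PORT)) as sock:
    f = sock.makefile("rw", encoding="utf-8", newline="\n")
    print(f.readline().strip())     # server greeting, e.g. "OK MPD 0.22.0"

    f.write("status\n")             # ask for playback state, volume, and song position
    f.flush()
    for line in iter(f.readline, "OK\n"):   # a response ends with a line containing only OK
        print(line.strip())

The graphical clients covered in the article speak this same protocol, just with a friendlier interface on top.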

LWN on Kernel: 5.12 Merge, Lockless Algorithms, and copy_file_range()

  • 5.12 Merge window, part 1 [LWN.net]

    The beginning of the 5.12 merge window was delayed as the result of severe weather in the US Pacific Northwest. Once Linus Torvalds got going, though, he wasted little time; as of this writing, just over 8,600 non-merge changesets have been pulled into the mainline repository for the 5.12 release — over a period of about two days. As one might imagine, that work contains a long list of significant changes.

  • An introduction to lockless algorithms [LWN.net]

    Low-level knowledge of the memory model is universally recognized as advanced material that can scare even the most seasoned kernel hackers; our editor wrote (in the July article) that "it takes a special kind of mind to really understand the memory model". It's been said that the Linux kernel memory model (and in particular Documentation/memory-barriers.txt) can be used to frighten small children, and the same is probably true of just the words "acquire" and "release". At the same time, mechanisms like RCU and seqlocks are in such widespread use in the kernel that almost every developer will sooner or later encounter fundamentally lockless programming interfaces. For this reason, it is a good idea to equip yourself with at least a basic understanding of lockless primitives. Throughout this series I will describe what acquire and release semantics are really about, and present five relatively simple patterns that alone can cover most uses of the primitives.

  • How useful should copy_file_range() be? [LWN.net]

    Its job is to copy len bytes of data from the file represented by fd_in to fd_out, observing the requested offsets at both ends. The flags argument must be zero. This call first appeared in the 4.5 release. Over time it turned out to have a number of unpleasant bugs, leading to a long series of fixes and some significant grumbling along the way. In 2019 Amir Goldstein fixed more issues and, in the process, removed a significant limitation: until then, copy_file_range() refused to copy between files that were not located on the same filesystem. After this patch was merged (for 5.3), it could copy between any two files, falling back on splice() for the cross-filesystem case. It appeared that copy_file_range() was finally settling into a solid and useful system call. Indeed, it seemed useful enough that the Go developers decided to use it for the io.Copy() function in their standard library. Then they ran into a problem: copy_file_range() will, when given a kernel-generated file as input, copy zero bytes of data and claim success. These files, which include files in /proc, tracefs, and a large range of other virtual filesystems, generally indicate a length of zero when queried with a system call like stat(). copy_file_range(), seeing that zero length, concludes that there is no data to copy and the job is already done; it then returns success. But there is actually data to be read from this kind of file, it just doesn't show in the advertised length of the file; the real length often cannot be known before the file is actually read. Before 5.3, the prohibition on cross-filesystem copies would have caused most such attempts to return an error code; afterward, they fail but appear to work. The kernel is happy, but some users can be surprisingly stubborn about actually wanting to copy the data they asked to be copied; they were rather less happy.
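The surprising behavior is straightforward to reproduce from Python, which exposes the system call as os.copy_file_range() (available since Python 3.8 on Linux). A minimal sketch, assuming a kernel that still exhibits the behavior described above:

import os

src = os.open("/proc/version", os.O_RDONLY)
dst = os.open("/tmp/version-copy", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)

print("stat() size:", os.fstat(src).st_size)     # /proc files typically report 0
copied = os.copy_file_range(src, dst, 65536)     # ask the kernel to copy up to 64 KiB
print("copy_file_range() copied:", copied)       # may be 0 bytes, yet it "succeeds"
print("read() returns:", len(os.read(src, 65536)), "bytes")  # a plain read does see data

os.close(src)
os.close(dst)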

Banana Pi BPI-M2 Pro is a compact Amlogic S905X3 SBC

Banana Pi has already designed an Amlogic S905X3 SBC with the Banana Pi BPI-M5, which closely follows the Raspberry Pi 3 Model B form factor, but they've now unveiled a more compact model, the Banana Pi BPI-M2 Pro, that follows the design of the company's earlier BPI-M2+ SBC powered by the good old Allwinner H3 processor. The BPI-M2 Pro comes with 2GB RAM, 16GB eMMC storage, HDMI video output, Gigabit Ethernet, WiFi & Bluetooth connectivity, as well as two USB 3.0 ports.